Comments
"There are 729,500 single women my age in New York City. My picture and profile successfully filtered out 729,499 of them and left me with the one I was looking for."
I know this is sort of meant as a joke, but I feel like one of the more interesting questions that could be addressed in an analysis like this is what percentage of the women in the dating pool you could actually have had a successful relationship with. How strong is your filter and how strong does it need to be? There's a tension between trying to find/obtain the best of many possible good options, and trying to find the one of a handful of good options in a haystack of bad ones.
I'm somewhat amazed that you looked at 300 profiles, read 60 of them, and liked 20 of them enough to send them messages. Only 1 in 5 potential matches met your standards for appearance, but 1 in 3 met your standards based on what they wrote, and that's not even taking into account the difference in difficulty between reading a profile and composing a message.
You make a big deal about the number of people available online, but in your previous article on soccer players you implied that shifts in the average have a much larger effect on the tails of a distribution than they do on its bulk. If you're really looking for mates in the tails of the distribution, and 1 in 729,500 is about a 4.5 sigma event, then being involved in organizations whose members are much more like your ideal mate on average may be a better strategy than online dating.
- There is regular structure in human values that can be learned without requiring detailed knowledge of physics, anatomy, or AI programming.
- Human values are so fragile that it would require a superintelligence to capture them with anything close to adequate fidelity.
- Humans are capable of pre-digesting parts of the human values problem domain.
- Successful techniques for value discovery of non-humans (e.g. artificial agents, non-human animals, human institutions) would meaningfully translate into tools for learning human values.
- Value learning isn't adequately being researched by commercial interests who want to use it to sell you things.
- Practice teaching non-superintelligent machines to respect human values will improve our ability to specify a Friendly utility function for any potential superintelligence.
- Something other than AI will cause human extinction sometime in the next 100 years.
- All other things being equal, an additional researcher working on value learning is more valuable than one working on corrigibility, Vingean reflection, or some other portion of the FAI problem.
I'm working through the udacity deep learning course right now, and I'm always trying to learn more things on the MIRI research guide. I'm in a fairly different timezone, but my schedule is pretty flexible. Maybe we can work something out?
This raises a really interesting point that I wanted to include in the top level post, but couldn't find a place for. It seems plausible/likely that human savants are implementing arithmetic using different, and much more efficient algorithms than those used by neurotypical humans. This was actually one of the examples I considered in support of the argument that neurons can't be the underlying reason humans struggle so much with math.
This is a really broad definition of math. There is regular structure in kinetic tasks like throwing a ball through a hoop. There's also regular structure in tasks like natural language processing. One way to describe that regular structure is through a mathematical representation of it, but I don't know that I consider basketball ability to be reliant on mathematical ability. Would you describe all forms of pattern matching as mathematical in nature? Is the fact that you can read and understand this sentence also evidence that you are good at math?
It's the average, (4-2)/2, rather than the sum, since the altruistic agent is interested in maximizing the average utility.
The tribal limitations on altruism that you allude to are definitely one of the tendencies that much of our cultural advice on altruism targets. In many ways the expanding circle of trust, from individuals, to families, to tribes, to cities, to nation states, etc. has been one of the fundamental enablers of human civilization.
I'm less sure about the hard trade-off that you describe. I have a lot of experience being a member of small groups that have altruism towards non-group members as an explicit goal. In that scenario, helping strangers also helps in-group members achieve their goals. I don't think large-group altruism precludes you from belonging to small in-groups, since very few in-groups demand any sort of absolute loyalty. While full-effort in-group altruism, including things like consciously developing new skills to better assist your other group members, would absolutely represent a hard trade-off with altruism on a larger scale, people appear to be very capable of belonging to a large number of different in-groups.
This implies that the actual level of commitment required to be a part of most in-groups is rather low, and the socially normative level of altruism is even lower. Belonging to a close-knit in-group with a particularly needy member (e.g. having a partially disabled parent, spouse, or child) may shift the calculus somewhat, but for most in-groups being a member in good standing has relatively undemanding requirements. Examining my own motivations, it seems that for many of the groups I participate in, most of the work I do to fulfill expectations and help others within those groups is driven more directly by my desire for social validation than by any selfless perception of the intrinsic value of the other group members.
Fiction is written from inside the head of the characters. Fiction books are books about making choices, about taking actions and seeing how they play out, and the characters don't already know the answers when they're making their decisions. Fiction books often seem to most closely resemble the problems that I face in my life.
Books that have people succeed for the wrong reasons I can put down, but watching people make good choices over and over and over again seems like a really useful thing. Books are a really cheap way to get some of the intuitive advantages of additional life experience. You have to be a little careful to pick authors that don't teach you the wrong lessons, but in general I haven't found a lot of histories or biographies that really try to tackle the problem of what it's like to make choices from the inside in an adequate way. If you've read lots of historically accurate works that do manage to give easily digested advice on how to make good decisions, I'd love to see your reading list.
On a very basic level, I am an algorithm receiving a stream of sensory data.
So, do you trust that sensory data? You mention reality; presumably you allow that an objective reality which generates the stream of your sensory data exists. If you test your models by sensory data, then that sensory data is your "facts" -- something that is your criterion for whether a model is good or not.
I am also not sure how you deal with surprises. Does sensory data always win over models? Or would you sometimes be willing to say that you don't believe your own eyes?
I don't understand what you mean by trust. Trust has very little to do with it. I work within the model that the sensory data is meaningful, that life as I experience it is meaningful. It isn't obvious to me that either of those things is true, any more than the parallel postulate is obvious to me. They are axioms.
If my eyes right now are saying something different than my eyes normally tell me, then I will tend to distrust my eyes right now in favor of believing what I remember my eyes telling me. I don't think that's the same as saying I don't believe my eyes.
group selection
When you said "more closely linked to genetic self-interest than to personal self-interest" did you mean the genetic self-interest of the entire species or did you mean something along the lines of Dawkins' Selfish Gene? I read you as arguing for the interests of the population gene pool. If you are talking about selfish genes then I don't see any difference between "genetic self-interest" and "personal self-interest".
The idea of the genetic self-interest of an entire species is more or less incoherent. Genetic self-interest involves genes making more copies of themselves. Personal self-interest involves persons making decisions that they think will bring them happiness, utility, what have you. To reiterate my earlier statement "the ability of individual members of that species to plan in such a way as to maximize their own well-being."
is a series of appeals to authority
Kinda, but the important thing is that you can go and check. In your worldview, how do you go and check yourself? Or are "streams of sensory data" sufficiently synchronised between everyone?
And I go look for review articles that support the quote that people care about social status. But if you don't consider expert opinion to be evidence, then you have to go back and reinvent human knowledge from the ground up every time you try and learn anything.
I can always go look for more related data if I have questions about a model. I can read more literature. I can make observations.
Fact just isn't an epistemological category that I have, and it's not one that I find useful. There are only models.
So how do you choose between different models, then? If there are no facts, what are your criteria? Why is the model of lizard overlords ruling the Earth any worse than any other model?
You use expressions like "because it's always been true in the past", but what do you mean by "true"?
My primary criterion is consistency. On a very basic level, I am an algorithm receiving a stream of sensory data. I make models to predict what I think that sensory data will look like in the future based on regularities I detect/have detected in the past. Models that capture consistent features of the data go on to correctly control anticipation and are good models, but they're all models. The only thing I have in my head is the map. I don't have access to the territory.
And yet I believe with perfect sincerity that, in general, my maps correspond to reality. I call that correspondence truth. I don't understand the separation you seem to be attempting to make between facts and models or models and reality.
aspect of the climate system that consistently and frequently changes between glacial and near-interglacial conditions in periods of less than a decade, and on occasion as rapidly as three years
I am not sure this interpretation of the data survived -- see e.g. this:
Neat. Thanks.
The article you link seems to go out of its way to not be seen as challenging my basic claim, e.g. "Having said this, it should be reemphasised that ice-core chemistry does show extremely rapid changes during climate transitions. The reduction in [Ca] between stadial to interstadial conditions during D-O 3 in the GRIP ice-core occurred in two discrete steps totalling just 5 years [Fuhrer et al., 1999]."
Indeed it is the success of the human species that I would cite as evidence for my assertion that human behavior is more closely linked to genetic self-interest than to personal self-interest. Cultural and social success is a huge factor in genetic self-interest.
I haven't been following the subject closely, but didn't the idea of group selection run into significant difficulties? My impression is that nowadays it's not considered to be a major evolution mechanism, though I haven't looked carefully and will accept corrections.
I'm not sure how group selection is related to material you're quoting. Cultural success and social success refer to the success of an individual within a culture/society, not to the success of cultures and societies.
If you don't consider the opinions of experts evidence, what qualifies?
Opinions are not evidence; they are opinions. Argument from authority is, notably, a fallacy. I call things which qualify "facts".
I mean, it's sort of a fallacy. At the same time, when I'm sick, I go to a doctor and get her medical opinion and treat it as evidence. I'm not an expert on the things that humans value. I don't have the time, energy, or background to perform experiments and evaluate statistical and experimental methods. Even trusting peer review and relying on published literature is a series of appeals to authority.
it's not obvious to me that children are a good investment
I think you're engaging in the nirvana fallacy. Children are not a good investment compared to what?
Again -- let's take a medieval European peasant. He has no ability to accumulate capital because he's poor, because his lord will just take his money if he notices it, and because once in a while an army passes through and basically grabs everything that isn't nailed down. He doesn't have any apprentices because peasants don't have apprentices (and apprentices leave once they learn the craft, anyway). He certainly has friends, but even his friends will feed their family before him when the next famine comes. So, what kind of investments into a non-starving old age should he make?
He can buy jars of salt and bury them. His children, if they survive, may feed their own children rather than him in the next famine. A network of friends and a high standing in the community are at least as valuable to him as investing resources in birthing and raising children who probably won't see adulthood. He can become an active and respected member of the church. The church is probably a better bet overall since there's a decent chance his own kids will die, but the church will probably survive.
I'm not an expert on 14th century investment opportunities; I just find the idea that children are clearly the best selfish investment incredible. If children are such a good investment, why did we need A Modest Proposal? And why are the rich, whose retirements are not in doubt, so desirous of children? Why does King Priam need 50 sons? He's the king of a city. What fears does he have about retirement?
I don't know that we have access to facts. Everything is interpreted. Everything is a model.
OK. There were 3,932,181 births in the US in 2013, giving a birth rate of 12.4 / 1000 population (source). Tell me what kind of model that is, and which theory this piece of information critically depends on.
The ones digit of that number is almost certainly wrong and I'm not particularly confident about the next two. Believing that number relies on an enormous number of assumptions about the bureaucracy that generated it. Now my model of the world tells me that the bureaucratic system that calculates the birth rate in the U.S. is fairly trustworthy, compared to say the system that manages elections in Russia, but that trust is totally a function of my model of the world. The data you gather depends on your methodology. Some methods may be better established and may have more evidence in support of them, and the data they gather may really seem reliable, but we also thought that the earth was standing still for a very long time.
Fact just isn't an epistemological category that I have, and it's not one that I find useful. There are only models. Some models are more descriptive and better than others; some are more supported by evidence. But there aren't facts; there are no fixed points that I'm 100% sure are true. I consider my knowledge that 2+2=4 to be as close to certain as just about anything else I believe, but I hesitate to call it a fact. I have that belief because it's always been true in the past and my brain has learned that induction is reliable. I could be convinced that 2+2=3, and if you believe something only because you have evidence to support it, then you must have a model that translates between the evidence and the belief.
Because the rate of climate change during the Pleistocene would have made long term forecasting difficult.
Huh? Can you, um, provide some links?
I'm hardly an expert on this, but searching for Pleistocene climate variation gives results like this:
"In addition to the well known millennium-scale stadial and interstadial periods, and the previously recognized century-scale climate events that occur during the Allerod and Bolling periods, we detect a still higher frequency of variability associated with abrupt climate change."
"The seasonal time resolution of the ECM record portrays as aspect of the climate system that consistently and frequently chnages between glacial and near-interglacial conditions in periods of less than a decade, and on occassion as rapidly as three years."
Climate Change: Natural climate change: proxy-climate data
The idea that people aren't, by nature, optimal decision makers is one of the core ideas of LW.
We're not talking about optimal decisions. We're talking about not screwing up. Humans are the most successful species on this planet -- they are capable of not screwing up sufficiently well.
We are specifically talking about the claim, "Would you seriously argue that people choose to have children as a reasonably optimal selfish way of guaranteeing that they continue to have enough to eat once they're no longer capable of working?"
I am not making the argument that there are no advantages to having and raising children from a retirement perspective. I am making the argument that it is unlikely that people choose to have children in order to obtain those advantages. I am making the argument that the decline in birthrate is unlikely to be due to people adjusting the number of children they have as part of a retirement plan. The success of a species has very little to do with the ability of individual members of that species to plan in such a way as to maximize their own well-being. Ants are collectively one of the most successful organisms in the world, but they certainly don't engage in long term planning.
Indeed it is the success of the human species that I would cite as evidence for my assertion that human behavior is more closely linked to genetic self-interest than to personal self-interest. Cultural and social success is a huge factor in genetic self-interest. There's a reason that humans have large brains and devote so many resources to processing social relationships and facial cues. We have equipment for obeying social mandates. We understand them intuitively. We don't have intuitive equipment for making long-term predictions, since that was never selected for.
status ... it seems to be the thing that people care most about after short term economic incentives
Evidence please. People certainly care about status, but I don't think that people always care about money first, status second, and everything else after that.
I consider the word of a Nobel Prize-winning game theorist and economist to qualify as "evidence" on the topic of aggregate human behavior. If you don't consider the opinions of experts evidence, what qualifies?
I don't understand why you think that human allegiances have to be founded on the nuclear family.
They don't have to be, but I think that empirical evidence points to family ties binding more tightly than others.
Okay, but that doesn't necessarily matter. The ties don't have to be tight, they just have to be adequate. Also, the parent->child bond is typically tighter than the "child->parent" bond. But even if we add an uncertainty cost to forming non-parent child relationships, it's not obvious to me that children are a good investment. Children die. Children turn out to be non-productive. Children require lots of resources. Even if my teenage apprentice may be less likely to support me, he's still way cheaper to build a bond with and way more likely to survive to adulthood. I don't see any good reason to birth children rather than recruit apprentices.
I'm not sure what you mean by fact.
I mean an observable and testable chunk of empirical reality. Not a theory, not an explanation, not a model.
I don't know that we have access to facts. Everything is interpreted. Everything is a model. Fact isn't a separate epistemological category. There are things we agree on, even things most people agree on, but I'm not sure what hard and fast distinction you could draw between facts and theories.
You are claiming that humans have evolved the psychological capacity to make decades long judgments in a reasonably optimal way
That seems pretty obvious to me. What, you think no one ever saves for retirement? Why do you believe that to be false?
Because humans engage in hyperbolic discounting. Because the rate of climate change during the Pleistocene would have made long term forecasting difficult. Because I don't see evidence of people making medium term judgments in a reasonably optimal way. The idea that people aren't, by nature, optimal decision makers is one of the core ideas of LW.
Unlike planning for retirement, achieving success within your culture's definition of it (i.e. status) is very important from a genetic evolution standpoint
So how come there are so many losers around? X-) Note that culture is a fairly recent development in "genetic evolution" and for a very long time "high status" implied a front row at the feast, but also a front row at the battle. I agree that high status helped survival, but I don't think it helped it enough so that evolution gave a major push to the fight-for-leadership genes.
I'm not actually sure that culture is recent. I would put the origins of culture at least tens of thousands of years ago, which is definitely appreciable on an evolutionary scale.
Also, status isn't necessarily the same thing as leadership, and it seems to be the thing that people care most about after short term economic incentives (e.g. "apart from economic payoffs, social status seems to be the most important incentive and motivating force of social behavior." - John Harsanyi). The prevalence of the human desire for social status seems pretty well-supported by the literature.
P.S. I'm enjoying this conversation.
That, actually, depends on the circumstances. But in any case, do you really suggest making friends as a good solution to who-will-feed-me problem? Don't forget that they will get old, too.
Human tribes have been a thing for about as long as there have been humans. People with an important role in the tribe don't starve to death. And yes, friends age, and so do children. You can make friends that aren't the same age as you. I don't understand why you think that human allegiances have to be founded on the nuclear family.
The reality is precisely what is being debated.
Is it? On which facts do we disagree?
I'm not sure what you mean by fact. You made the claim that in reality people have children because they think it's a good retirement option, and that they choose the number of children they have based on how many children they will need in order to make sure they don't starve to death in the real world. You are claiming that humans have evolved the psychological capacity to make decades long judgments in a reasonably optimal way and that they use that capacity when deciding how many children to have. That is a claim about reality. If it were true, it would be a fact. I think that it is false. I think that people choose whether or not to have children based on culture, and that culture is largely determined by the rules "Copy what most people are doing" and "Copy what successful people are doing". (That's not a commentary on the depth or richness of culture. Complex systems often have simple rules.) I also think that successful people in the present and near-past have tended to have fewer children, and I think that the falling birthrate can be attributed to that. I'm fairly sure that the falling birthrate has much more to do with cultural definitions of success than with anyone's concern for feeding themselves 40 years in the future.
Unlike planning for retirement, achieving success within your culture's definition of it (i.e. status) is very important from a genetic evolution standpoint and would be selected for. I think it's much more likely that evolution equipped humans to seek cultural success than it is that evolution equipped humans to sacrifice having children based on concerns for how best to spend their reproductively inactive retirement.
Parents often devote significant resources to caring for special needs children who are unlikely to grow into good providers.
All the more reason to have a large extended family. These children will grow into adults who continue to need extra support, and there's no reason for parents to support them on their own. The more siblings you have to help out the better.
From a selfish perspective, the correct decision isn't to have more children. It's to kill or disown the ones who not only won't repay your investment, but will actually compete with you for the return on your investment in your other children.
It's also not totally obvious to me that children are a particularly good investment from a long-term wealth or even a guaranteed income perspective.
This is because you are thinking of wealth as money. For much of the population of the world, and increasingly so as you go back in time, wealth means enough food on the table, enough food in the root cellar to get you through the winter, and enough grain seed to replant + keep you alive a year or two if the crops fail + enough to plant again once the famine is over. As long as another set of hands increases productivity, another pair of hands is a good investment.
I have a pretty broad-minded view of wealth actually. If you're a New Guinea highlander you can invest in mokas. You can trade your neighbors for goats or land. You can accumulate social capital by being generous and well-liked. You can enter into partnerships with younger partners. Another set of hands is only a good investment if it offers nearly the best return for investment, which is a much higher hurdle than merely "increasing productivity." It would actually be enormously surprising if the best selfish return you could possibly get for your time and effort was finding a mate and having children, especially given the high infant/child mortality rates. If children were such a good investment then why did we need A Modest Proposal?
Women in general were low status. Many of their concerns and desires were ignored unless they happened to match concerns and desires that benefitted men. The fact that women didn't have alternatives to being a mother was just a special case of that.
How did men benefit? Did all men benefit? Were the men also constrained by cultural roles that served to benefit women?
women's desires were considered irrelevant by society.
This is too strong a statement
Almost any statement interpreted while ignoring connotation is too strong. "Women's desires were considered irrelevant by society" means "an important set of women's desires relevant to the current conversation were considered irrelevant by society", not "all women's desires were considered irrelevant by society". Don't ignore connotation.
Context is probably a better word to use than connotation.
My argument is precisely that women's desires were considered relevant. I think that society, which is, after all, about half women, has never considered the desires of women to be irrelevant, nor has it ever considered the desires of men to be irrelevant. Society has definite opinions about what sorts of desires are socially appropriate, but that's very different from considering desires irrelevant. I think that your objection is about a perceived lack of social roles, especially formal social roles, for unmarried women in some subset of human cultures. Most traditional human societies also lack important social roles for unmarried men.
The transition to an emphasis on personal merit as a source of status rather than familial success has created high status social roles for both men and women outside of the context of family and reproduction. Because men were less tied to reproduction both biologically and culturally, that transition disproportionately affected men at its beginning, and for a while Western cultures had many social roles for unmarried men and virtually none for unmarried women. But that was a fairly anomalous period in human history; for the vast majority of history women have been just about as important to human economic production as men, and as the status of child production has continued to drop, fathers and mothers alike have encouraged their daughters to pursue education, careers, and other paths that lead to positions of high social status.
Being low status has always meant being vulnerable to social violence, and ascribing status is one of the ways that societies create and maintain social norms. The attractiveness of a position in society is dictated by the value and status society ascribes to it, and that valuation is always a set of "external reasons". Particularly low status groups or members of society who are perceived as different or in violation of important social norms are often ascribed the status of "criminal" or "enemy" and are left especially vulnerable to social violence.
women's desires were considered irrelevant by society.
This is too strong a statement. Many women desire to have children. That desire was hardly considered irrelevant by society. Similarly, many women desire to get married or to worship God. These desires weren't considered irrelevant by society. Quite the opposite. It was considered very important that women have these desires. Desires that led to high social status, like wanting to marry a young man in good standing in the community, were strongly encouraged. Desires that led to low or uncertain social positions like becoming a transient were discouraged, and if pursued in spite of discouragement, punished for undermining the social order. Society almost never respects the desire to become a social deviant.
A society that accords value based on nuclear family size has the social roles of mother, father, and provider and ascribes status to its members based on their success in those roles. In traditional societies, both men and women have jointly fulfilled the latter role. The proliferation of new social roles and highly-esteemed places in the community that have nothing to do with the nuclear family, (i.e. having achievements other than children become markers of status) is the reason that men and women both have a place in the community other than as parents and children.
A man living in a tribe of subsistence foragers can't ever choose to become a full-time string theorist. He can sometimes choose to become a full-time shaman or priest. Both professions study mainly imaginary things, the difference is that one of these roles is ascribed status by the community while the other is not.
I don't see humans commonly engaging in a lot of long-term thinking on the scale of decades
No particular need. First, it is what happens by default if you don't take heroic birth control measures (remember, no pill or effective condoms); second, it's culturally ingrained: that's what everyone does.
I'm a little uncomfortable classifying infanticide as heroic, but that aside I feel like your claim is shifting. At first you claimed that people choose to have children because they are making an optimal selfish long-term retirement decision and that they choose to have children as a good investment in service to that goal. Now you're saying that people don't really choose to have children for that reason, but that they have children in response to biological pressures and cultural norms. But the claim that family size is driven primarily by cultural norms, which are largely dictated by the perception of which behaviors are regarded as high status, is literally my original claim.
It's also not totally obvious to me that children are a particularly good investment from a long-term wealth or even a guaranteed income perspective.
You're a peasant in Mozambique. Or in XII-century France. What are your other options?
Make friends with people that I didn't help create? Accumulate wealth? There are lots of durable human social institutions other than the nuclear family. There are certainly more of them in the modern world, but it's not like all those childless medieval monks starved to death.
But why is that even an option?
You seem to be surprised that what evolutionary psychology says must happen does not happen in reality. I would like to suggest that this a problem for the theory, not for the reality.
The reality is precisely what is being debated. I am making the claim that the choices that populations of people make, esp. with regard to family size, can be understood in terms of evolution and selection, and that they should reflect, in some form or fashion adaptations consistent with genetic self-interest. You are making the claim that people's choices are more driven by their own rational self-interest, and that understanding the incentives available to individual rational actors is the better predictor of behavior. It seems to me that here you're just labeling your claim "reality" and saying that if evolution disagrees with it then that's a problem with evolution.
That doesn't explain why people choose to have small families.
No, but that explains why that choice exists.
Not really. Humans have exercised control over family size for thousands of years via all sorts of different mechanisms. Modern birth control is certainly more convenient than the vast majority of ancient mechanisms, but it's not clear that the increase in convenience is why the modern world is a lot less excited than the ancient one about the command "Go forth and multiply."
Would you seriously argue that people choose to have children as a reasonably optimal selfish way of guaranteeing that they continue to have enough to eat once they're no longer capable of working?
Yes, seriously, I find nothing outlandish about this assertion. Why are you so surprised?
It's not a totally unreasonable argument; it's just very contrary to my personal experience. I don't see humans commonly engaging in a lot of long-term thinking on the scale of decades, and child-creating/child-rearing typically seems to be dominated by a lot of deep instinctual emotional cues. Parents often devote significant resources to caring for special needs children who are unlikely to grow into good providers. Parents seem to derive a lot of satisfaction and to compete for status based on the way that their children perform in school or child sports or other competitions, even though these are very weakly linked to any sort of economic productivity. Hutterite communities practice common ownership and thus have a more extensive social safety net than even modern societies, but traditionally have large average family sizes.
It's not obvious to me that humans reproduce for reasons substantially different from the reasons that other animals reproduce, and it's very obvious that most creatures aren't having children in order to secure their retirement. The tendency of parents to take exceptional risks in order to protect their offspring is also better explained by these traits being promoted by genetic self-interest than the idea that children are a rational agent's retirement plan. It seems profoundly weird to me that all the other animals reproduce because their genes tell them to, but humans just so happen to make the same exact decision for a completely different reason.
It's also not totally obvious to me that children are a particularly good investment from a long-term wealth or even a guaranteed income perspective. I feel like if most people directed the same amount of resources into securing their retirement via other means that they typically direct into child-rearing they would often end up better off. It's an interesting thesis though, and it has some neat fits to the data. The falling birthrate, it could be argued, is due to the fact that there are more long-term investment opportunities now than there were in the past and people have chosen to diversify their investments. At the same time, modern decreases in child-mortality and death in child-birth might make children a more attractive investment than they were in the past, so there are definitely some issues.
Because if you're managing the number of your children, you're managing the number of children who'll grow up to adulthood.
But why choose to have fewer children grow to adulthood rather than more? If my children are more likely to survive to adulthood then I should have more children, not fewer, since child birthing and raising is now a better investment than it was before.
Clearly, people are interested in more than that and on a very regular basis choose NOT to maximize the spread of their genes.
But why is that even an option? What evolutionary advantage is so adaptive that, even though it leads to obviously maladaptive behavior like choosing not to have children you could easily support, the adaptation is still a net positive on average?
I'm not sure what the difference is between [Women have attractive alternatives to just being a mother] and "achievements other than sex and children have become markers of status".
The difference is you're talking solely about status and I'm talking about a much wider context.
I think the more fundamental difference may actually be that I'm talking about population level processes and you're thinking about things on the level of individual decision makers. I think most human decisions are constrained fairly tightly by the culture that they find themselves in and that copying high status individuals is one of the driving forces of cultural change. I think cultures that have family size as a primary marker of achievement and status don't create the opportunity for people to support themselves by going to graduate school to study science or math instead of raising a family. Social status is very much about having a place in the community, and cultures don't create good attractive places in the community for occupations with low social status. An individual may choose to pursue a Ph.D. in math rather than a more conventionally high status career in finance, but a society that doesn't see educational attainment as a mark of status isn't going to have a career track for graduate students at all.
I'm not sure that this is true, or maybe I'm not sure that considering things on average is a good measure of surprise. Finding out you were wrong about something is much more surprising than learning something in the first place. Limited reasoners tend to discard alternative hypotheses when something fits the data well enough. Learning that the earth was flying through space around the sun even though it really doesn't feel like it is was much more surprising to me than it would have been if I hadn't seen the ground so stubbornly sitting still for most of my life. I feel like the more I learn, the more surprised I am when something is different than I expect it to be. I may be surprised less, but my increased confidence in my model makes those surprises all the more salient.
Birth control is widely available;
That doesn't explain why people choose to have small families. In the Iliad the 50 rooms filled with Priam's sons are a mark of his wealth and power, guaranteeing the success and continuity of his bloodline. They aren't an accident. In developing nations people are proud of their large families and they regard as unfortunate people who only have a few children. Birth control may enable the transition, but it doesn't explain the stark difference in attitude.
Social safety nets (and middle-class wealth) reduce the need for children as someone who feeds you in your old age;
Would you seriously argue that people choose to have children as a reasonably optimal selfish way of guaranteeing that they continue to have enough to eat once they're no longer capable of working?
If your children's chances to survive to adulthood are very high you don't need to give birth to that many;
Why not? It certainly helps maximize my genetic fitness. In general, if an animal discovers a new environment with plentiful resources and no predators, it doesn't suddenly decide to have fewer children, since they now have a higher chance of surviving to adulthood. It certainly doesn't end up with fewer adult children than it had before, but that's exactly the pattern we observe in many developed nations.
Women have attractive alternatives to just being a mother
I'm not sure what the difference is between this statement and "achievements other than sex and children have become markers of status".
The desire for children isn't, in general, a rational one. It is rarely in the rational self-interest of a creature to reproduce. My instinct is that the desire for more/less children is driven more strongly by the selfishness of genes and memes than the selfishness of organisms. Either way, the genetically maladaptive decision to limit reproduction in spite of being able to easily feed larger families is certainly not limited to the LessWrong subculture, however poorly defined that is, but appears to be the social norm in almost all developed nations.
But face it. You're weird. And I mean that in a bad way, evolutionarily speaking. How many of you have kids? Damn few. The LessWrong mindset is maladaptive. It leads to leaving behind fewer offspring.
It's surprisingly not weird. Birthrates in the developed world have plummeted precisely because achievements other than sex and children have become markers of status. Having a large family is no longer seen as an indicator of high status but as something that makes you a bit of a cultural oddball, and that attitude is spreading. Cultural evolution happens much more quickly than its biological counterpart, and the trend away from cultural success being defined in terms of reproductive success is currently dominating genetic pressures towards organizing our lives around increasing genetic fitness.
EDIT: I tried to keep this comment fairly brief, since it's only tangentially related to why people might want to die, but it seems to have engendered a fairly lively discussion, so it may be worth adding the following clarification to this top-level comment:
In traditional human cultures, family size is pretty well-correlated with social status. While the exact cause of this correlation can be debated, having a large family is typically regarded as high status. Much of human culture is driven by individuals copying behavior they perceive to be common and high status, and the correlation between status and family size leads to norms and traditions that promote family and family size as important sources for status within many human cultures. In light of this, and also the genetic adaptiveness of large families referred to in the original post, it's somewhat surprising to see that this correlation has been inverted. High status individuals very rarely have large families, despite being easily able to support additional children. Because culture is based on imitation and is self-reinforcing it's not particularly important how this inversion came about, but it will serve to normalize behavior that focuses on education and career at the expense of child creation/child rearing, a norm which appears to still be spreading/gaining strength based on demographic data on birth rates over time.
My speculative hypothesis, although it is certainly not original, is that the falling birthrate is the culmination of a set of cultural pressures that are rooted in the rise of industrialization. As industrialization spreads, there is a decoupling of economic wealth and family, since wealth production is no longer tied to land, titles, and other generational resources. Because wealth increases over time in industrial economies, individuals are incentivized to delay marriage because their increasing wealth affords them access to high status mates if they wait longer. The delayed marriages both directly decrease family size, and result in a population of unmarried individuals who can't use legitimate family size as a status measure. These individuals then compete for status on the basis of wealth, education, etc. This self-reinforcing cycle erodes cultural norms that say having a large family is a mark of status and replaces them with norms that award respect to people with money, important jobs, advanced degrees, or a large number of Twitter followers.
While having fewer offspring is obviously genetically maladaptive, copying high status behaviors has traditionally been a very effective strategy in the human gene pool. Over long time periods we may see genetic adaptations that encourage the production of offspring and a corresponding rise in human family size, but the cultural shift towards more education and fewer children that we have observed so far has happened too quickly for genetic selection to really offset. While LW may differ from the larger culture in many ways, in this instance, LW is very normal, very much a product of the larger cultural attitude drift.
I feel like this may be a semantics issue. I think that order implies information. To me, saying that a system becomes more ordered implies that I know about the increased order somehow. Under that construction, disorder (i.e. the absence of detectable patterns) is a measure of ignorance and disorder then is closely related to entropy. You may be preserving a distinction between the map and territory (i.e. between the system and our knowledge of the system) that I'm neglecting. I'm not sure which framework is more useful/productive.
I think it's definitely an important distinction to be aware of either way.
But if I know that all the gas molecules are in one half of the container, then I can move a piston for free and then as the gas expands to fill the container again I can extract useful work. It seems like if I know about this increase in order it definitely constitutes a decrease in entropy.
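To put rough numbers on that piston argument (a minimal sketch, assuming an ideal gas of $N$ molecules held at temperature $T$ by contact with a heat bath): letting the gas expand isothermally from $V/2$ back to $V$ against the piston extracts work

$$W = \int_{V/2}^{V} \frac{N k_B T}{V'}\, dV' = N k_B T \ln 2,$$

and the entropy of the gas rises by $\Delta S = N k_B \ln 2$ as it does so. Knowing that all the molecules were in one half of the container is therefore worth exactly $N k_B \ln 2$ of entropy relative to the spread-out state, which is the sense in which that knowledge looks like a decrease in entropy.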
The 2nd law is never violated, not even a little. Unfortunately the idea that entropy itself can decrease in a closed system is a misconception which has become very widespread. Disorder can sometimes decrease in a closed system, but disorder has nothing to do with entropy!
Could you elaborate on this further? Order implies regularity which implies information that I can burn to extract useful work. I think I agree with you, but I'm not sure that I understand all the implications of what I'm agreeing with.
Meditation (empirical/practical emphasis), and more broadly the psychology associated with executive function and attentional control.
Set theory, topology, deep learning. Probably most math/computer science topics.
Anything that someone thinks they have a really good intuitive explanation for. Omniscience was one of my life goals when I was growing up.
Physics, quantum mechanics, related math concepts like linear algebra, abstract vector spaces, differential equations, calculus.
Much of the material in the LW sequences.
Optimization and machine learning. Also, shell scripting, python, perl, matlab, computability, numerical methods, basic data structures and algorithms.
More randomly: electrochemical energy storage, Li-ion batteries, distance running, dog training, Christian theology, Latin, English/American literature, poetry.
I like this idea. There are lots of things that I know and even more things that I'm interested in knowing, but I'm not sure I understand how it would play out.
How much tutoring experience do you have? What sorts of resources would there be for tutors? How long do you see tutor relationships lasting? What does it look like to tutor someone in Python programming? Is this person trying to learn Python on their own? Are they following a guide I'm familiar with? I know calculus like the back of my hand, but that doesn't mean I have lesson plans mapped out. Tutoring is typically done in addition to some other form of education.
I'm not sure that every student can be a teacher, and if you want people to be good teachers, I think it will require giving them access to a lot more resources than a GoogleGroups page and webcam.
I like the concept, I'd be interested in seeing a more fleshed out description.
I'm an advocate of this approach in general for a number of reasons, and it's typically how I explain the idea of FAI to people without seeming like a prophet of the end times. Most of the reasons I like value-learning focus on what happens before a super-intelligence or what happens if a super-intelligence never comes into being.
I am strongly of the opinion that real world testing and real world application of theoretical results often exposes totally unanticipated flaws, and it seems like for the value-learning problem that partial/incomplete solutions are still tremendously useful. This means that progress on the value-learning problem is likely to attract lots of attention and resources and that consequently proposed solutions will be more thoroughly tested in the real world.
Some of the potential advantages:
Resources: It seems like there's a strong market incentive for understanding human preferences in the form of various recommendation engines. The ability to discern human values, even partially, translates well into any number of potentially useful applications. Symptoms of success in this type of research will almost certainly attract the investment of substantial additional resources to the problem, which is less obviously true for some of the other research directions.
Raising the sanity waterline: Machines aren't seen as competitors for social status and it's typically easier to stomach correction from a machine than from another person. The ability to share preferences with a machine and get feedback on the values those preferences relate to would potentially be an invaluable tool for introspection. It's possible that this could result in people being more rational or even more moral.
Translation: Humans have never really tried to translate human values into a form that would be comprehensible to a non-human before. Value learning is a way to give humans practice discovering/explaining their values in precise ways. This, to my mind, is preferable to the alternative approach of relying on a non-human actor to successfully guess human morality. One of my human values is for humans to have a role in shaping the future, and I'd feel much more comfortable if we got to contribute in a meaningful way to the estimate of human values held by any future super-intelligence.
Relative Difficulty: The human values problem is hard, but discovering human values from data is probably much harder than just learning/representing human values. Learning quantum mechanics is hard, but the discovery of the laws of quantum mechanics was much, much more difficult. If we can get the human values problem small enough to make it into a seed AI, the chances of AI friendliness increase dramatically.
I haven't taken the time here to consider in detail how the approaches outlined in your post interact with some of these advantages, but I may try and revisit them when I have the opportunity.
Just happened across this article summary today about people using atomic spectra to look for evidence of dark matter. I don't know that they've found anything yet, but it's sort of neat how closely related your proposal here is to their research.
The true pattern (i.e. the many-particle wavefunction) is smooth. The issue is that the pattern depends on the positions of every electron in the atom. The variational principle gives us a measure of the goodness of the wavefunction, but it doesn't give us a way to find consistent sets of positions. We have to rely on numerical methods to find self-consistent solutions for the set of differential equations, but it's ludicrously expensive to try to sample the solution space given the dimensionality of that space.
It's really difficult to solve large systems of coupled differential equations. You run into different issues depending on how you attempt to solve them. For most machine-learning type approaches, those issues manifest themselves via the curse of dimensionality.
I really like this topic, and I'm really glad you brought it up; it probably even deserves its own post.
There are definitely some people who are trying this, or similar approaches. I'm pretty sure it's one of the end goals of Stephen Wolfram's "New Kind of Science" and the idea of high-throughput searching of data for latent mathematical structure is definitely in vogue in several sub-branches of physics.
With that being said, while the idea has caught people's interest, it's far from obvious that it will work. There are a number of difficulties and open questions, both with the general method and the specific instance you outline.
As far as I know, (1) we assume that the laws of the universe are simple
It's not clear that this is a good assumption, and it's not totally clear what exactly it means. There are a couple of difficulties:
a.) We know that the universe exhibits regular structure on some length and time scales, but that's almost certainly a necessary condition for the evolution of complex life, and the anthropic principle makes that very weak evidence that the universe exhibits similar regular structure on all length/time/energy scales. While clever arguments based on the anthropic principle are typically profoundly unsatisfying, the larger point is that we don't know that the universe is entirely regular/mathematical/computable and it's not clear that we have strong evidence to believe it is. As an example, we know that a vanishingly small percentage of real numbers are computable; since there is no known mechanism restricting physical constants to computable numbers, it seems eminently possible that the values taken by physical constants such as the gravitational constant are not computable.
b.) It's also not really clear what it means to say the laws of physics are simple. Simplicity is a somewhat complicated concept. We typically talk about simplicity in terms of Occam's razor and/or various mathematical descriptions of it such as message length and Kolmogorov complexity. We typically say that complexity is related to how long it takes to explain something, but the length of an explanation depends strongly on the language used for that explanation. While the mathematics that we've developed can be used to state physical laws relatively concisely, that doesn't tell us very much about the complexity of the laws of physics, since mathematics was often created for just that purpose. Even assuming that all of physics can be concisely described by the language of mathematics, I'm not sure that mathematics itself is "simple".
c.) Simple laws don't necessarily lead to simple results. If I have a set of 3 objects interacting with each other via a 1/r^2 force like gravity, there is no general closed-form solution for the positions of those objects at some time t in the future. I can simulate their behavior numerically, but numerical simulations are often computationally expensive, the numeric results may depend on the initial conditions in unpredictable ways, and small deviations in the initial set up or rounding errors early in the simulation may result in wildly different outcomes (see the toy sketch after this list). This difficulty strongly affects our ability to model the chemical properties of atoms. Since each electron orbiting the nucleus interacts with each other electron via the coulomb force, there is currently no way to exactly describe the behavior of the electrons even for a single isolated many-electron atom.
d.) A simple set of equations is insufficient to specify a physical system. Most physical laws are concerned with the time evolution of physical systems, and they typically rely on the initial state of the system as a set of input parameters. For many of the systems physics is still trying to understand, it isn't possible to accurately determine what the correct input parameters are. Because of the potentially strong dependence on the initial conditions outlined in c.), it's difficult to know whether a negative result for a given set of equations/parameters implies needing a new set of laws, or just slightly different initial conditions.
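As a toy illustration of point c.) (everything here is an arbitrary, illustrative choice: unit masses, G = 1, a made-up starting configuration, and a plain fixed-step velocity-Verlet integrator with no special handling of close encounters), the sketch below runs the same three-body system twice, with one coordinate nudged by one part in a billion, and prints how far apart the two runs end up; for a generic configuration like this the tiny perturbation typically grows by many orders of magnitude:

```python
import numpy as np

def accelerations(pos, masses, G=1.0):
    """Pairwise 1/r^2 gravitational accelerations for a small n-body system."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def run(pos, vel, masses, dt=1e-3, steps=10000):
    """Integrate with velocity Verlet (fixed step, no close-encounter handling)."""
    acc = accelerations(pos, masses)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * acc * dt ** 2
        acc_new = accelerations(pos, masses)
        vel = vel + 0.5 * (acc + acc_new) * dt
        acc = acc_new
    return pos

masses = np.array([1.0, 1.0, 1.0])
pos0 = np.array([[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]])
vel0 = np.array([[0.0, 0.4], [-0.3, -0.2], [0.3, -0.2]])

# Same system, except one coordinate is perturbed by one part in a billion.
pos1 = pos0.copy()
pos1[0, 0] += 1e-9

final0 = run(pos0.copy(), vel0.copy(), masses)
final1 = run(pos1, vel0.copy(), masses)
print("separation between the two runs after t = 10:",
      np.linalg.norm(final0 - final1))
```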
In short, your proposal is difficult to enact for similar reasons that Solomonoff induction is difficult. In general there is a vast hypothesis space that varies over both a potentially infinite set of equations and a large number of initial conditions. The computational cost of evaluating a given hypothesis is unknown and potentially very expensive. It has the added difficulty that even given an infinite set of initial hypotheses, the correct hypothesis may not be among them.
Noise is certainly a problem, but the biggest problem for any sort of atomic modelling is that you quickly run into an n-body problem. Each one of the n electrons in an atom interacts with every other electron in that atom, and so to describe the behavior of each electron you end up with a set of 70-something coupled differential equations. As a consequence, even if you just want a good approximation of the wavefunction, you have to search through a 3n-dimensional Hilbert space, and even with a preponderance of good experimental data there's not really a good way to get around the curse of dimensionality.
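To put a rough number on the curse of dimensionality, here's a back-of-the-envelope sketch of what it would cost to naively store an n-electron wavefunction on a real-space grid. The grid resolution and the 16 bytes per complex amplitude are illustrative assumptions, not a statement about how anyone actually does this.

```python
# Back-of-the-envelope: storing a many-electron wavefunction on a naive grid.
grid_points_per_axis = 10      # an extremely coarse grid (assumed)
bytes_per_amplitude = 16       # one complex double

for n_electrons in (1, 2, 3, 5, 10):
    dims = 3 * n_electrons                      # 3 spatial coordinates per electron
    amplitudes = grid_points_per_axis ** dims   # grows exponentially with dims
    print(f"{n_electrons:2d} electrons: {dims:2d}-dimensional wavefunction, "
          f"{amplitudes:.3e} grid points, "
          f"~{amplitudes * bytes_per_amplitude:.3e} bytes")
```

Even a laughably coarse 10-point-per-axis grid puts a 10-electron atom at around 10^30 amplitudes, which is why practical methods lean on basis sets and approximations rather than brute force.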
That's not really true. You can write a review article as one of your first publications and use it to lay out what you intend to work on. People won't take your review article as seriously as they will one written by Dr. Bigshot et al., but there certainly aren't any rules against it.
Also, the NSF is thrilled if you're a beginner and you're doing any sort of popular outreach. They love pop science blogs.
Thanks so much for your thoughtful response. This clarifies the position dramatically and makes it sound much more attractive. If I have any further questions related to my application specifically, I'll certainly let you know.
It's sort of not that useful though. This is a description of the "shovel-ready" projects, and those are actually pretty straightforward. If you fit into one of those categories, you'd basically be working under a single person in a well-defined discipline, and if you're not already familiar with them you can get a pretty good sense of who you'd be working for by scanning a half-dozen paper abstracts. There's a decent chance you'd actually be funded directly out of the individual professor's research grant. It's pretty much business as usual.
But being a post-doc for an interdisciplinary center can be a lot more confusing. If the center has someone who is an expert in your field, then they're semi-qualified to supervise your work and they sort of become your boss by default. If there isn't an expert in your field, the standard academic mentor-apprentice model starts to break down and it's not always clear what will replace it. Sometimes you become predominantly a lackey/domain expert/flex researcher for existing projects. Sometimes the center recruits someone to mentor you. Sometimes you are expected to develop a novel focus for the group. If the group has been around for a while you can estimate a lot of these answers just from its publication history, but with something brand new it's much harder.
And this is a stupidly hard problem to even describe. It isn't clear what department "All the things that might possibly go wrong that would make us all die" belongs in. On some level I understand why the "Specialist knowledge and skills" are super vague, general things like "good level of scientific literacy" and "strong quantitative reasoning skills." Overly broad job listings are par for the course, but before I personally put together a 3-page project proposal or hunt down a 10-page writing sample relevant (or even comprehensible) to people outside my field, I'd like to have some sense of whether anyone would even read them or whether they'd just be confused as to why I applied.
"Candidates should have a PhD in a relevant field"
I'm really curious as to what constitutes a relevant field. The 3 people you list are an economist, a conservation biologist, and someone with a doctorate in geography. Presumably those are relevant fields, but I don't know what they have in common exactly.
I don't know what to think about this. You're new and you have sort of unconventional funding and a really broad mission statement. I'm not really sure what sort of research you're looking for or what journals it would be published in. I can't tell how much of this is science and how much of this is economics or political science and your institute is under the umbrella of the Arts and Humanities Research Center. What sorts of positions do you envision your post-doctoral fellows taking two years down the road?
This is definitely interesting, but I'm not sure I have any real idea who you're looking for. Having read your website, downloaded the job listing, and read the bios of the people involved, I'm still not really sure. I can't figure out whether this seems sort of vague and confusing because it isn't directed at me or because you're still sort of figuring out the shape of the group yourself.
This is phenomenally clear thinking and has clarified something I've been struggling to understand for the last 10 years. Thank you.
So the extent to which various traits are adaptive vs. maladaptive is an interesting question. There are a lot of hidden trade-offs, especially when you start discussing cognitive heuristics. Modern life also has some fairly different selection pressures than the ones our species has historically been exposed to, so maybe some of those instincts are getting outdated.
But all of that is secondary to a much larger consideration. Evolution doesn't share my goals. Evolution designed my brain for gene propagation. It does a decent job at survival, resource acquisition, and many other problems because those are useful for gene propagation. But I have almost no interest in gene propagation! I'm interested in the truth, even if the truth won't get me laid. My deep suspicion of many of my biological impulses isn't because I suspect natural selection of being a limited bumbling algorithm, but is instead rooted in my conviction that those biological impulses have a different goal in mind than I do.
As a side note, tool development isn't a super useful competitive advantage because it's a lot easier to steal or copy a tool than it is to develop one. The advantage you get from making a new tool is always temporary.
Given how you have set this problem up, what do you think will be the relative prices of the 4 contracts you specified?
I understand that we're capable of calculating P(A|B), but if P(A|B) isn't on the market, then the market won't reflect the value of P(A|B). So I don't understand your statement that the market will somehow get the answer wrong because of its estimate of P(A|B). The market makes no value estimate of that quantity.
Your market, as stated, is really strange in a lot of ways. By having the contracts include "Bush wins" or "Clinton wins" the market is essentially predicting itself. It's going to have really strong attractors for a landslide victory. It seems like that isn't what you intend, but it's going to be the consequence of your current set up. Judging by the number of other people who have also replied that they are confused, you may want to rework this example.
"The normalization is because we want to compare what happens conditional on Hillary being elected to what happens conditional on Jeb being elected. These probabilities will not be comparable unless we normalize."
Why would we want to do this? Your contracts aren't structured in such a way that they encourage these sorts of conditional considerations. P(A|B) isn't on the market. P(A and B) is. Maybe you meant for your contracts to be "If Hillary is elected, the U.S. will be nuked." ?
"if we normalize by dividing by the marginal probability that Hillary is elected, we get 1/3 which is equal to Pr[Nuked | Clinton Elected]. In other words, the prediction market estimates the wrong quantities."
Why are you doing this normalization? It doesn't seem related to the 4 contracts on your prediction market in an obvious way.
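To make concrete what I think is going on here: the four contracts price conjunctions, and a conditional like Pr[Nuked | Clinton elected] is only ever implied as a ratio of those prices; it isn't itself traded. Here's a toy sketch with made-up prices, chosen so the Clinton ratio comes out to the 1/3 in the quoted line above.

```python
# Toy illustration: the market prices conjunctions like P(nuked AND Clinton elected);
# a conditional such as P(nuked | Clinton elected) is only implied as a ratio of
# those prices. All numbers are made up for illustration.

prices = {
    ("clinton", "nuked"):     0.10,
    ("clinton", "not nuked"): 0.20,
    ("bush", "nuked"):        0.05,
    ("bush", "not nuked"):    0.65,
}

def implied_conditional(candidate):
    """P(nuked | candidate elected) = P(nuked AND elected) / P(elected)."""
    joint = prices[(candidate, "nuked")]
    marginal = prices[(candidate, "nuked")] + prices[(candidate, "not nuked")]
    return joint / marginal

for candidate in ("clinton", "bush"):
    print(candidate, "elected -> implied P(nuked | elected) =",
          round(implied_conditional(candidate), 3))
```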
"There was a common cause of Hillary being elected and the US being nuked. This common cause - whether Kim Jong-Un was still Great Leader of North Korea"
I'm confused as to how Kim Jong-Un being leader of NK "causes" Hillary to be elected. That seems to go against state 5 in your table.
The reason that AI wants to turn the universe into paperclips is because it's the 2nd coming of Clippy.
I'm not sure that the number of possible states of the universe is relevant. I would imagine that the vast majority of the variation in that state space would be characterized by human indifference. The set of all possible combinations of sound frequencies is probably comparably enormous, but that doesn't seem to have precluded Pandora's commercial success.
I have to categorically disagree with the statement that people don't have access to their values or that their answers about what they value will tend to be erratic. I would wager that an overwhelming percentage of people would rather find 5 dollars than stub their toe. I would furthermore expect that answer to be far more consistent and stable than people's answers when asked to name their favorite music genre or favorite song. This reflects something very real about human values. I can construct ethical or value dilemmas that are difficult for humans to resolve, but I can do that in almost any domain where humans express preferences. We're not going to get it perfectly right. But with some practice, we can probably make it to acceptably wrong.
How are human values categorically different from things like music preference? Descriptions of art also seem to rely on lots of fairly arbitrary objects that are difficult to simplify.
I'm also not sure what qualifies as unacceptably wrong. There's obviously some utility in having very crude models of human preferences. How would a slightly less crude model suddenly result in something that was "unacceptably" wrong?
I don't think that trying to solve the Schrödinger equation itself is particularly useful. The SE is a partial differential equation, and there's a whole logic of differential equations, boundary conditions, etc. that provides context for the SE. If you're serious about trying to understand quantum mechanics, I think the concept of Hilbert space/abstract vector spaces/linear algebra in general is a bigger conceptual shift than just being able to solve the particle in a box in function space. It's also just a really useful set of concepts that makes learning things like optimization, coordinate/Fourier transforms, etc. easier and more intuitive.
Until I had the wave function explained to me as a vector in a high-dimensional space that we could map into x-space or p-space or Lz-space, I don't think I really had a good grasp on quantum mechanics. This is anecdote, not data; your mileage may vary.
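As a concrete version of the "wavefunction as a vector you can re-express in different bases" picture, here's a minimal numpy sketch (grid size, packet width, and the momentum k0 are made-up illustrative numbers): discretize a 1D Gaussian wave packet, treat the samples as a vector, and map it into p-space with a discrete Fourier transform.

```python
import numpy as np

# Discretize a 1D wavefunction on a grid: the "vector in a high-dimensional space".
N = 1024
L = 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# A Gaussian wave packet with average momentum k0 (illustrative values).
k0 = 5.0
psi_x = np.exp(-x**2 / 2) * np.exp(1j * k0 * x)
psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx)   # normalize in x-space

# "Mapping into p-space" is just a change of basis: here a discrete Fourier transform.
psi_k = np.fft.fftshift(np.fft.fft(psi_x)) * dx / np.sqrt(2 * np.pi)
k = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi
dk = k[1] - k[0]

print("norm in x-space:", np.sum(np.abs(psi_x)**2) * dx)
print("norm in k-space:", np.sum(np.abs(psi_k)**2) * dk)
print("peak of |psi(k)|^2 at k ≈", k[np.argmax(np.abs(psi_k)**2)])
```

The norm comes out the same in either basis and the momentum distribution peaks near k0, which is the point: x-space and p-space are just two coordinate systems for the same vector.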
I don't understand this. Why should my utility function value me having a large income or having a large amount of money? What does that get me?
I don't have a good logical reason for why my life is a lot more valuable than anyone else's. I have a lot more information about how to effectively direct resources into improving my own life vs. improving the lives of others, but I can't come up with a good reason to have a dominantly large "Life of leplen" term in my utility function. Much of the data suggests that happiness/life quality isn't well correlated with income above a certain income range and that one of the primary purposes of large disposable incomes is status signalling. If I have cheaper ways of signalling high social status, why wouldn't I direct resources into preserving/improving the lives of people who get much better life quality/dollar returns than I do? It doesn't seem efficient to keep investing in myself for little to no return.
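A toy version of the quality-per-dollar point, under the standard (assumed, not measured) diminishing-returns model of log utility:

```python
# Toy illustration of "quality per dollar" under an assumed log-utility model.
# Not a claim about real happiness data.
import math

def marginal_utility(income, delta=1.0):
    """Utility gained from one additional dollar at a given income."""
    return math.log(income + delta) - math.log(income)

for income in (1_000, 10_000, 100_000):
    print(f"${income:>7,}: utility gain from one more dollar ≈ "
          f"{marginal_utility(income):.6f}")
```

Under that assumption the same dollar buys roughly a hundred times more utility at a $1,000 income than at a $100,000 one, which is the shape of the argument for directing marginal resources elsewhere; whether real welfare data actually looks logarithmic is a separate empirical question.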
I wouldn't feel comfortable winning a 500 dollar door prize in a drawing where half the people in the room were subsistence farmers. I'd probably tear up my ticket and give someone else a shot to win. From my perspective, just because I won the lottery on birth location and/or abilities doesn't mean I'm entitled to hundreds of times as many resources as someone else who may be more deserving but less lucky.
With that being said, I certainly don't give anywhere near half of my income to charity, and it's possible the values I actually live by are closer to what you describe than the situation I outline. I'm not sure, and I'm not sure how that changes my argument.
I'm really curious as to where you're getting the $500B number from. I felt like I didn't understand this argument very well at all, and I'm wondering what sorts of results you're imagining as a result of such a program.
It's worth noting that 1E30-1E40 is only the cost of simulating the neurons; an estimate for the computational cost of simulating the fitness function is not given, although it is stated that the fitness function "is typically the most computationally expensive component". So the evaluation of the fitness function (which presumably has to be complicated enough to accurately assess intelligence) isn't even included in that estimate.
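For a rough sense of scale on that 1E30-1E40 figure (neurons only, fitness function excluded), here's the arithmetic with assumed hardware throughputs: 1e18 FLOP/s is roughly an exascale machine, and 1e21 FLOP/s is a hypothetical thousand of them.

```python
# Rough scale check on the quoted 1E30-1E40 FLOP range; throughputs are assumptions.
seconds_per_year = 3.15e7

for total_flops in (1e30, 1e40):
    for flops_per_second in (1e18, 1e21):
        years = total_flops / flops_per_second / seconds_per_year
        print(f"{total_flops:.0e} FLOPs at {flops_per_second:.0e} FLOP/s "
              f"≈ {years:.1e} years")
```

Even the bottom of the range works out to decades to tens of millennia of compute at these rates, and the top of the range is far beyond geological timescales, before the fitness function is counted at all.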
It's also not clear to me, at least, that simulating neurons is capable of recapitulating the evolution of general intelligence. I don't believe it is a property of individual neurons that causes the brain to be divided into two hemispheres. I don't know anything about brains, but I've never heard of left neurons or right neurons. So is it the neurons that are supposed to be mutating, or some unstated variable that describes the organization of the various neurons? If the latter, then what is the computational cost associated with that superstructure?
I feel like "recapitulating evolution" is a poor term for this. It's not clear that there's a lot of overlap between this sort of massive genetic search and actual evolution. It's not clear that computational cost is the limiting factor. Can we design a series of fitness functions capable of guiding a randomly evolving algorithm to some sort of general intelligence? For humans, it seems that the mixture of cooperation and competition with other equally intelligent humans resulted in some sort of intelligence arms race, but the evolutionary fitness function that led to humans, or to the human ancestors isn't really known. How do you select for an intelligent/human like niche in your fitness function? What series of problems can you create that will allow general intelligence to triumph over specialized algorithms?
Will the simulated creatures be given time to learn before their fitness is evaluated? Will learning produce changes in neural structure? Is the genotype/phenotype distinction being preserved? I feel like it's almost misleading to include numerical estimates for the computational cost of what is arguably the easiest part of this problem without addressing the far more difficult theoretical problem of devising a fitness landscape that has a reasonable chance to produce intelligence. I'm even more blown away by the idea that it would be possible to estimate a cash value to any degree of precision for such a program. I have literally no idea what the probability distribution of possible outcomes for such a program would be. I don't even have a good estimate of the cost or the theory behind the inputs.
The normal distribution is just a model. You have to be very careful about expectations involving 6-sigma events. Nothing guarantees your Gaussian works well that far from the mean.
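For reference, here's what the normal model itself claims about those tails, assuming it holds exactly that far out (which is precisely the assumption in question); scipy's survival function gives the one-sided tail probability.

```python
# Nominal Gaussian tail probabilities, *if* the normal model holds that far out.
from scipy.stats import norm

for k in (3, 4.5, 6):
    p = norm.sf(k)   # one-sided tail probability P(X > mu + k*sigma)
    print(f"{k} sigma: one-sided tail ≈ {p:.2e} (about 1 in {1/p:,.0f})")
```

A 6-sigma event is nominally around a one-in-a-billion occurrence; if the real distribution has even slightly fatter tails than a Gaussian, that number can be off by orders of magnitude.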
Because one requires only a theoretical breakthrough and the other requires engineering. Ideas iterate very quickly. Hardware has to be built. The machines that make the machines you want to use have to be designed, whole industries may have to be invented. A theoretical breakthrough doesn't have the same lag time.
If I work as a theorist and I have a brilliant insight, I start writing the paper tomorrow. If I work as an experimentalist and I have a brilliant insight, I start writing the grant to purchase the new equipment I'll need.