How long has civilisation been going?
post by Elo · 2017-07-22T06:41:20.830Z · LW · GW · Legacy · 37 comments
I didn't realise how short human history was. Somewhere around 130,000 years ago we were anatomically much as we are today. Somewhere around 50,000 years ago we broadly arrived at:
the fully modern capacity for Culture *
That's roughly when we started the "routine use of bone, ivory, and shell to produce formal (standardized) artifacts". Agriculture, with humans staying put to grow plants, arrived at about 10,000 BCE (roughly 12,000 years ago).
Writing started around 6600 BCE* (8,600 or so years ago).
This year is 5777 in the Hebrew calendar. So someone has been counting for roughly that long.
The pyramids are estimated to have been built around 2600 BCE (4,600 years ago).
Somewhere between then and year zero of the Christian calendar we sorted out a lot of metals and how to use them.
And somewhere between then and now we made the rest of the technological advances that led to the present day.
But it's hard to get a feel for that. Those are just some numbers of years. Instead I want to relate that to our lives. And our generations.
12,000 years ago is a good enough point to start paying attention to.
A human generation is normally between 12* and 35* years: further back, generations would have been closer to 12 years apart, while today they are shifting to more like 30 years apart (and up to 35). That means the bounds are:
12,000 / 12 = 1,000
12,000 / 35 ≈ 342
342-1000 generations. That's all we have. In all of humanity. We are SO YOUNG!
(If you take the 8,600-year number as the starting point you get a range of roughly 245-717.)
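If you want to redo the division with different assumptions, here is a minimal sketch of the arithmetic (the 12-35 year generation spans and the 12,000 / 8,600 year starting points are just the figures above, nothing more):

```python
# Generations since the start of "civilisation", using the post's figures:
# 12,000 or 8,600 years of history, and generations 12 to 35 years apart.
for years_of_history in (12_000, 8_600):
    most = years_of_history / 12    # short, early-history generations
    fewest = years_of_history / 35  # long, modern-style generations
    print(f"{years_of_history} years: roughly {fewest:.1f} to {most:.1f} generations")
```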
Let's make it personal
I know my grandparents, which means there is a non-negligible chance I will also know my grandchildren and maybe even more (depending on medical technology). I already have a living niece, so I have already experienced 4 generations. Without being unreasonable I can expect to see 5, and dream of seeing 6, 7 or infinitely many.
(5/1000) to (7/342) is between half a percent and two percent. That is, I will have lived through 0.5%-2% of human generations to date (ignoring longevity escape for a moment).
Compared to other life numbers:
365 days × 100 years = 36,500 days in a 100-year lifespan.
52 weeks × 100 = 5,200. Or one week of a 100-year lifespan is equivalent to one generation of humans.
12,000 years / 365 days = 32.9 years. Or, by the time you are 33 years old you have lived more days than there have been years of humans collecting artefacts of worth.
8,600 years / 365 = 23.6 years. Or, when you are 24 years old you have lived one day for every year humans have had written records.
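The days-per-year-of-history comparison is the same one-line division, again assuming only the figures above:

```python
# At what age have you lived one day for every year of history?
for years_of_history in (12_000, 8_600):
    print(f"{years_of_history} years of history ≈ {years_of_history / 365:.1f} years of days lived")
```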
Discrete human lives
If you put an olden-day discrete human life at 25 years (maybe more) and a modern-day discrete life at 90 years, and compare those to the numbers above:
12,000 / 25 = 480 discrete human lifetimes
12,000 / 90 = 133 discrete human lifetimes
8,600 / 25 = 344 discrete human lifetimes
8,600 / 90 = 95 discrete human lifetimes
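As a small sketch of the divisions just above (the 25- and 90-year lifespans are the assumed figures from this section):

```python
# Discrete human lifetimes stacked end to end across history.
for years_of_history in (12_000, 8_600):
    for lifespan in (25, 90):
        print(f"{years_of_history} / {lifespan} = {years_of_history / lifespan:.1f} lifetimes")
```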
That's to say, the entirety of recorded history is only about 350 independent human lives stacked end to end.
Everything we know of in history has happened in somewhere under 480 discrete lifetime run-throughs.
Humanity is so young. And we forget so easily that 500 lifetimes ago we were nothing.
Meta: Thanks Billy for hanging out and thinking about the numbers with me. This idea came up on a whim, took a day of thinking about, and about an hour to write up.
Original post: http://bearlamp.com.au/how-long-has-civilisation-been-going/
37 comments
comment by username2 · 2017-07-22T15:33:21.524Z · LW(p) · GW(p)
This year is 5777 in the Hebrew calendar. So someone has been counting for roughly that long.
Nitpick (as it doesn't affect your general argument): What actually happened was that at some point some king's advisor or prophet applied some guesswork to oral history that bordered on myth (e.g. Noah living 950 years) and decided the world was created in 3761 BCE. This is, in fact, exactly the same logic used by creationists to date the Earth to be ~6000 years old. That's the origin of the Hebrew calendar. There haven't been 5777 years of continuous counting. More like 3500, maybe.
Replies from: JenniferRM
↑ comment by JenniferRM · 2017-07-25T22:49:02.243Z · LW(p) · GW(p)
There are poorly documented rumors running around on the net that the Yorùbá have a religious system that contains a chronological system that says our year 2017 is the year 10,059.
This claim deserves scrutiny rather than trust, and might stretch the idea of a calendar a bit...
It is very hard to find formal academic writing on the subject... Reading around various websites and interpolating, it seems that the cultural group was split in two by the Nigeria/Benin border and so I think there may be no single coherent state power that might back the calendar out of unifying nationalist sentiment. Also they may have no native word for "calendar"? Also it is a lunar calendar of 364 days and the intercalary adjustments might not be systematic and it may have been pragmatically abandoned in favor of the system the international world has mostly been standardizing on...
Still, I personally am interested not only in old surviving institutions but also in things that function as edge cases. Straining words like "old" or "surviving" or "institution". The edge cases often help quite a bit to illustrate the optimization constraints and design pressures that go into very long running social practices :-)
comment by Stabilizer · 2017-07-22T07:42:05.491Z · LW(p) · GW(p)
Umm... 12000/25 is 480. Not 48. All the other numbers in the discrete human lifetimes section should be multiplied by ten. Not as impressive as you might've thought. Still, kinda impressive I suppose.
Replies from: Elo
↑ comment by Elo · 2017-07-22T09:00:41.985Z · LW(p) · GW(p)
Oh God I suck that's really bad of me. Will fix.
Replies from: Stabilizer
↑ comment by Stabilizer · 2017-07-22T17:21:59.482Z · LW(p) · GW(p)
You might want to correct: "And we forget so easily that 50 lifetimes ago we were nothing."
Replies from: Elo
comment by entirelyuseless · 2017-07-22T16:43:28.404Z · LW(p) · GW(p)
This post is probably correct in its facts, but I disagree with the way it seems to be presenting the implications. See here. In other words, sure civilization is relatively young, but humanity is not, and the abilities that civilization is built on are much older than civilization itself.
Replies from: Thomas
↑ comment by Thomas · 2017-07-23T10:30:25.197Z · LW(p) · GW(p)
sure civilization is relatively young, but humanity is not
Almost every vegetable now is quite different from its ancestry several hundred years ago. The same goes for dogs and horses. And you think that humans are somehow magically exempted from this rapid evolution?
Not at all. We have some new layers of biological conditions, too.
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-07-23T14:23:54.885Z · LW(p) · GW(p)
We have some new layers of biological conditions, too.
Not in a way that would get anyone to say "that is not a human." I linked to a post that refers to the skeleton of a boy that died 2,000 years ago who diverged from the rest of the human gene pool 250,000 years ago. Now, of course, there has been interbreeding. But even if there hadn't been, anyone saying that those people were not human would deservedly get called a racist.
Replies from: Lumifer, Thomas
↑ comment by Lumifer · 2017-07-24T15:11:11.086Z · LW(p) · GW(p)
anyone saying that those people were not human would deservedly get called a racist.
That's an interesting argument. Is this the new criterion of truth or something? Should you believe (or not) certain things depending on what other people might call you?
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-07-25T02:27:39.740Z · LW(p) · GW(p)
Is this the new criterion of truth or something?
No.
Should you believe (or not) certain things depending on what other people might call you?
Mostly no, but in principle it is possible. If you do not want to be called something, a greater probability that you will be called that thing given that you believe something, means a lower utility from believing it. So if you care about what you are called as well as caring about truth, you might have to trade away some truth for the sake of what you are called.
More importantly, my comment included the word "deservedly." If you say something true, and you have good reason to say it, and people call you a racist, that will be undeserved. If you are called that deservedly, it either means the thing was false, or at least that you did not have a good reason to say it. In the case of saying those people were not human, it would be both false, and something there is no good reason to say.
Replies from: Lumifer
↑ comment by Lumifer · 2017-07-25T15:23:05.201Z · LW(p) · GW(p)
Lots of verbiage, but I still don't understand your point.
People call other people many things. If you believe that AI is dangerous, some people will call you an idiot. If you are gay, some people will call you a moral degenerate. If you're Snowden, some people will call you a hero and other people will call you a traitor. So what?
As to that "good reason to say it", who judges what's a good reason and what is not?
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-07-26T14:57:38.218Z · LW(p) · GW(p)
The point is that deciding to say something, or even deciding to believe it, is like any other decision, like deciding to go to the store. Human beings care about many things, and therefore many things can affect their decisions, including about what to believe. Let me give an example:
Suppose you think there is an 80% chance that global warming theory is correct. You say, "If I believe that the theory is correct, there will be an 80% chance that I am believing the truth, and a 20% chance that I am believing a falsehood. I get a unit of utility from believing the truth, and a negative unit from believing a falsehood. So that will give me 0.6 expected utility from believing the theory. Consequently I will believe it."
But suppose you also think there is an 80% chance that black people have a lower average IQ than white people. You say, "As in the other case, there is a positive expected utility from the probability of believing the truth, if I believe this. But there is a 99% chance that people will call me a racist, and being called a racist has a utility of -0.8. Consequently the total expected utility of believing the theory is -0.192. Therefore I am not going to believe it." Note that if there was a 99% chance that the theory was true in this case, your expected utility would be about 0.19, which would be positive, so you would probably choose to believe it. So being called a racist can affect whether you believe it, but it will affect it less when you consider more probable theories.
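Written out as a minimal sketch, the toy model above looks like this (every number is the comment's own toy figure: +1/-1 for believing a truth/falsehood, -0.8 for being called a racist, and a 99% chance of that happening; nothing here is meant as a serious model):

```python
# Toy model: expected utility of deciding to believe a proposition,
# with an optional social penalty. All numbers are the comment's own.
def expected_utility(p_true, penalty=0.0, p_penalty=0.0):
    truth_term = p_true * 1 + (1 - p_true) * (-1)  # +1 for a truth, -1 for a falsehood
    return truth_term + p_penalty * penalty

print(round(expected_utility(0.80), 3))              #  0.6   (global warming example)
print(round(expected_utility(0.80, -0.8, 0.99), 3))  # -0.192 (average-IQ example)
print(round(expected_utility(0.99, -0.8, 0.99), 3))  #  0.188 (same claim at 99% probability)
```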
As to that "good reason to say it", who judges what's a good reason and what is not?
If you mean whose judgement determines it, no one's does, just as no one's judgement determines whether the earth goes around the sun.
Replies from: Lumifer
↑ comment by Lumifer · 2017-07-26T15:26:05.920Z · LW(p) · GW(p)
The point is that deciding to say something, or even deciding to believe it, is like any other decision, like deciding to go to the store.
Deciding to say something, sure, but deciding to believe is a bit different. Your degree of conscious control is much more limited there. You can try to persuade yourself, but yourself might not be willing to be persuaded :-/
Suppose you think there is an 80% chance that global warming theory is correct... Consequently I will believe it.
Huh? One of the most basic lessons of LW is that belief in propositions is not binary but a fraction between 0 and 1 which we usually call probability. If you think there is an 80% chance that the global warming theory is correct, this is your belief. I don't see any need to make it an "I believe it fully and with all my heart" thing.
Consequently the total expected utility of believing the theory is -0.192. Therefore I am not going to believe it
Correct. This is precisely the difference between people who care about what reality actually is and people who are mostly concerned with society's approval.
Choose your side.
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-07-27T01:44:17.405Z · LW(p) · GW(p)
Your degree of conscious control is much more limited there. You can try to persuade yourself, but yourself might not be willing to be persuaded :-/
I agree that there is often more difficulty, but there is no difference in principle from the fact that you might decide to go to the store, but suddenly be overcome by a wave of laziness so that you end up staying home playing video games.
Huh? One of the most basic lessons of LW is that belief in propositions is not binary but a fraction between 0 and 1 which we usually call probability. If you think there is an 80% chance that the global warming theory is correct, this is your belief.
It is a question of being practical. I agree with thinking of probabilities as formalizing degrees of belief, but it is not practical to be constantly saying "there is an 80% chance of such and such," or even thinking about it in this way. Instead, you prefer to say and think, "this is how it is." Roughly you can analyze "decide to believe this" as "decide to start treating this as a fact." So if you decide to believe the global warming theory, you will say things like "global warming is happening." That will not necessarily prevent you from admitting that the probability is 80%, if someone asks you specifically about the probability.
This is precisely the difference between people who care about what reality actually is and people who are mostly concerned with society's approval.
Choose your side.
All humans care at least a little about truth, but also about other things. So you cannot divide people up into people who care about what reality actually is and people who care about other things like society's approval -- everyone cares a bit about both. Consequently, if some people say, "we care only about truth, and nothing else," those people are saying something false. So why are they saying it? Most likely, it is precisely because of one of the things they care about other than truth: namely looking impressive. Since I care more about truth than most people, including the people who want to look impressive, I will tell the truth about this: I care about truth, but I also care about other things, and the other things I care about can affect not only my actions, but also my thoughts and beliefs.
Replies from: ChristianKl, Lumifer
↑ comment by ChristianKl · 2017-07-31T15:14:42.162Z · LW(p) · GW(p)
If we believe that global warming of exactly +2 C is going to happen within 100 years with 99.9% probability, the most reasonable response is to do geoengineering to counteract those +2 C. One of the primary reasons for choosing a different strategy is that there's a lot of uncertainty involved.
If you grant a 20% chance that global warming isn't happening, that geoengineering project has the potential to mess up a lot.
If people in charge follow the epistemology that you propose I think there's a good chance that humanity won't survive this century because someone thinks taking a 1% chance to destroy humanity isn't a problem.
Any single job application I send out has a more than 80% chance of being rejected. If I followed your practical advice I wouldn't send out any applications. That's pretty stupid "practical advice".
Elon Musk says that he thought there was a 10% chance when he started SpaceX that it would become a successful company. If he had followed your advice he wouldn't have started that valuable company, and the same is likely true for many other founders who have a realistic view of their chances of success.
One of the core reasons why Eliezer wrote the sequences is to promote the idea that low probability high impact events matter a great deal and as a result, we should invest money into X-risk prevention.
So you cannot divide people up into people who care about what reality actually is and people who care about other things like society's approval -- everyone cares a bit about both
While that's true there's a lot of value of having norms of discussion on LW that uphold the ideal of truth. I don't think it's good to try to act against truthseeking norms like you do here.
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-08-01T01:47:38.673Z · LW(p) · GW(p)
If people in charge follow the epistemology that you propose I think there's a good chance that humanity won't survive this century because someone thinks taking a 1% chance to destroy humanity isn't a problem.
You did not understand the proposal. Let's analyze a situation like that. Suppose there is a physics experiment which has a 99% chance to be safe, and a 1% chance to destroy humanity. The people in charge ask, "Should we accept it as a fact that the experiment will be safe?"
According to our previous stipulations, the utility of believing that it is safe will be 0.99, minus the disutility of believing that it is safe when it is not, so a total of 0.98, considering only the elements of truth and falsehood.
But presumably people care about not destroying humanity as well. Let's say the utility of destroying humanity is negative 1,000,000. Then the total utility of treating it as a fact that the experiment will be safe will be 0.98 - (0.01 * 1,000,000), or in other words, -9,999.02. Very bad. So they will choose not to believe it. Nor does that mean that they will believe the opposite: this would have a utility of -0.98, which is still negative. So they will choose to believe neither, and simply say, "It would probably be safe, but there would be too much risk of destroying humanity, so we will not do it." This is presumably the result that you want, and it is also the result of following my proposal.
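The physics-experiment calculation above can be written out the same way (the +1/-1 truth utilities, the -1,000,000 for destroying humanity, and the 99%/1% split are the numbers stated in the comment):

```python
# Expected utility of treating "the experiment is safe" as a fact,
# versus treating "the experiment is unsafe" as a fact.
p_safe = 0.99
u_destroy_humanity = -1_000_000

eu_believe_safe = (p_safe * 1 + (1 - p_safe) * (-1)) + (1 - p_safe) * u_destroy_humanity
eu_believe_unsafe = (1 - p_safe) * 1 + p_safe * (-1)

print(round(eu_believe_safe, 2))    # -9999.02
print(round(eu_believe_unsafe, 2))  # -0.98
```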
Any single job application I send out has a more than 80% chance of being rejected. If I followed your practical advice I wouldn't send out any applications. That's pretty stupid "practical advice".
This should be analyzed in the same way. You do not choose to say, "This will definitely not be accepted," because you will not maximize your utility that way. Instead, you say, "This will probably not be accepted, but it might be."
In other words, you seem to think that I was proposing a threshold where if something has a certain probability, you suddenly decide to accept it as a fact. There is no such threshold, and I did not propose one. Depending on the case, you will choose to treat something as a fact, when it will maximize your utility to do so. Thus for example when you look out the window and see rain, you say, "It is raining," rather than "It is probably raining," because the cost of adding the qualification in every instance is greater than the benefit of just being right about the rain, given the small risk of being wrong and the harmlessness of that in most cases.
While that's true there's a lot of value of having norms of discussion on LW that uphold the ideal of truth. I don't think it's good to try to act against truthseeking norms like you do here.
I am in favor of truth and truthseeking norms, and I most definitely did not "try to act against truthseeking norms" as you suggest that I did. I am against the falsehood of asserting that you care only about truth. No truthseeker would claim such a thing; but a status seeker might claim it.
Replies from: ChristianKl
↑ comment by ChristianKl · 2017-08-01T07:58:29.897Z · LW(p) · GW(p)
I am in favor of truth and truthseeking norms, and I most definitely did not "try to act against truthseeking norms" as you suggest that I did.
If you value those norms there is no reason to say "But even if there hadn't been, anyone saying that those people were not human would deservedly get called a racist" and defend that notion.
I am against the falsehood of asserting that you care only about truth.
That feels to me like a strawman. Who made such an assertion?
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-08-02T00:47:50.510Z · LW(p) · GW(p)
If you value those norms there is no reason to say "But even if there hadn't been, anyone saying that those people were not human would deservedly get called a racist" and defend that notion.
If someone says that Jews are not human, he would deservedly be called a racist. That has nothing to do with attacking truthseeking norms, because the claim about Jews is utterly false. The same thing applies to the situation discussed.
That feels to me like a strawman. Who made such an assertion?
I did not know you were talking about the discussion of racism. I thought you were talking about the fact that I said that other terms in your utility function besides truth should affect what you do (including what you treat as a fact, since that is something that you do.) That seems to me a reasonable interpretation of what you said, given that your main criticism seemed to be about this.
Replies from: ChristianKl
↑ comment by ChristianKl · 2017-08-02T06:26:28.530Z · LW(p) · GW(p)
If someone says that Jews are not human, he would deservedly be called a racist.
Communication always focuses on a subset of the available facts. You can make a choice to focus on influencing other people to believe certain things by appealing to rational argument. Here you made the choice to influence other people by appealing to the social desirability of holding certain beliefs.
Making that choice damages truth-seeking norms.
Whether someone "deserves" something is also a moral judgement and not just a statement of objective facts.
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-08-02T14:55:05.286Z · LW(p) · GW(p)
I think it might be more obvious to someone that saying that Jews are not human deserves moral opprobrium, than that Jews are human. If you are not a moral realist, you might think this is impossible, but I am a moral realist, and I don't see any reason why the moral statement might not be more obvious. In particular, I think it would likely be true for many people in the case discussed. In that case, there is no reason not to bring it up in a discussion of this kind, since it is normal to lead people from what is more obvious to what is less obvious. And there is nothing against truthseeking norms in doing that.
I suspect that you will disagree, but your disagreement would be like a conservative economist saying "minimum wages are harmful, so if you propose minimum wages you are hurting people." The person proposing minimum wages might in fact be hurting people, but this is definitely not what they are trying to do. And as I said originally, I was not attacking truthseeking or truthseeking norms in any way. (And I am not saying that I am wrong in fact in this way either -- I am just saying that you should not be attacking my motives in that way.)
Replies from: ChristianKl
↑ comment by ChristianKl · 2017-08-02T15:09:26.549Z · LW(p) · GW(p)
My charge isn't about motives but about effects.
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-08-02T15:12:28.151Z · LW(p) · GW(p)
Fine. I disagree with your assessment.
↑ comment by Lumifer · 2017-07-27T16:59:11.357Z · LW(p) · GW(p)
no difference in principle
I think there is. One big difference is that the algorithm you need to follow to get to the store is clear, simple, and known. But you don't know which algorithm to follow to make yourself believe some arbitrary thing.
It is a question of being practical.
I see absolutely no practical problems in labeling my beliefs as "pretty sure it's true", "likely true", "more likely than not", etc. I do NOT prefer to 'say and think, "this is how it is."'
So you cannot divide people up into people who care about what reality actually is and people who care about other things like society's approval -- everyone cares a bit about both.
I can easily set up a gradient with something like Amicus Plato, sed magis amica veritas at one end and somebody completely unprincipled on the other.
You explicitly said:
Consequently the total expected utility of believing the theory is -0.192. Therefore I am not going to believe it.
which actually gives zero utility to believing what is true. That puts you in a rather extreme position on that gradient.
How much manipulation of your utility function will be necessary to make you truly love the Big Brother?
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-07-28T03:04:20.575Z · LW(p) · GW(p)
But you don't know which algorithm to follow to make yourself believe some arbitrary thing.
Actually, I do. I already said that believing something is basically the same as treating it as a fact, and I know how to treat something as a fact. Again, I might not want to treat it as a fact, but that is no different from not wanting to go to the store: the algorithm is equally clear.
I see absolutely no practical problems in labeling my beliefs as "pretty sure it's true", "likely true", "more likely than not", etc. I do NOT prefer to 'say and think, "this is how it is."'
Your comment history contains many flat out factual claims without any such qualification. Thus your revealed preferences show that you agree with me.
I can easily set up a gradient with something like Amicus Plato, sed magis amica veritas at one end and somebody completely unprincipled on the other.
I agree that there is such a gradient, but that is quite different from a black and white division into people who care about truth and people who don't, as you suggested before. This is practically parallel to the discussion of the binary belief idea: if you don't like the binary beliefs, you should also admit that there is no binary division of people who care about truth and people who don't.
You explicitly said:
Consequently the total expected utility of believing the theory is -0.192. Therefore I am not going to believe it.
which actually gives zero utility to believing what is true. That puts you in a rather extreme position on that gradient.
First of all, that was a toy model and not a representation of my personal opinions, which is why it started out, "But suppose you also think there is an 80% chance..." If you are asking about my real position on that gradient, I am pretty far into the extreme end of caring about truth. Far enough that I refuse to pronounce the falsehood that I don't care about anything else.
Second, it is unfair even to the toy model to say that it gives zero utility to believing what is true. It assigns a utility of 1 to believing a truth, and therefore 0.8 to an 80% probability of believing a truth. But the total utility of believing something with a probability of 80% is less, because that probability implies a 20% chance of believing something false, which has negative utility. Finally, in the model, the person adds in utility or disutility from other factors, and ends up with an overall negative utility for believing something that has an 80% chance of being true. I.e. not "truth" and not "zero utility." In particular, to the degree that it is true or probably true, that adds utility. Believing a falsehood with the same consequences, in this model, would have even lower utility.
Replies from: Lumifer
↑ comment by Lumifer · 2017-07-28T17:03:56.504Z · LW(p) · GW(p)
believing something is basically the same as treating it as a fact, and I know how to treat something as a fact
Not quite. The whole point here is the rider-elephant distinction, and no, your conscious mind explicitly deciding to accept something as a fact does not automatically imply that you (the whole you) now believe this.
Your comment history contains many flat out factual claims without any such qualification
Correct. The distinction between what you (internally) believe and what you (externally) express is rather large. Not in the sense of lying, but in the sense that internal beliefs contain non-verbal parts and are generally much more complex than their representations in any given conversation.
you should also admit that there is no binary division of people who care about truth and people who don't.
Sure, I'll admit this :-)
It assigns a utility of 1 to believing a truth
Fair point, I forgot about this.
I think my main claim still stands: if what you (sincerely) accept as true is a function of your utility function, appropriate manipulation of incentives can make you (sincerely) believe anything at all -- thus the Big Brother.
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-07-29T01:58:05.630Z · LW(p) · GW(p)
your conscious mind explicitly deciding to accept something as a fact does not automatically imply that you (the whole you) now believe this.
Belief is a vague generalization, not a binary bit in reality that you could determinately check for. The question is what is the best way to describe that vague generalization. I say it is "the person treats this claim as a fact." It is true that you could try to make yourself treat something as a fact, and do it once or twice, but then on a bunch of other occasions not treat it as a fact, in which case you failed to make yourself believe it -- but not because the algorithm is unknown. Or you might treat it as a fact publicly, and treat it as not a fact privately, in which case you do not believe it, but are lying. And so on. But if you consistently treat it as a fact in every way that you can (e.g. you bet that it will turn out true if it is tested, you act in ways that will have good results if it is true, you say it is true and defend that by arguments, you think up reasons in its favor, and so on) then it is unreasonable not to describe that as you believing the thing.
Correct. The distinction between what you (internally) believe and what you (externally) express is rather large. Not in the sense of lying, but in the sense that internal beliefs contain non-verbal parts and are generally much more complex than their representations in any given conversation.
I already agreed that the fact that you treat some things as facts would not necessarily prevent you from assigning them probabilities and admitting that you might be wrong about them.
I think my main claim still stands: if what you (sincerely) accept as true is a function of your utility function, appropriate manipulation of incentives can make you (sincerely) believe anything at all -- thus the Big Brother.
That depends on the details of the utility function, and does not necessarily follow. In real life people tend to act like this. In other words, rather than someone deciding not to believe something that has a probability of 80%, the person first decides to believe that it has a probability of 20%, or whatever. And then he decides not to believe it, and says that he simply decided not to believe something that was probably false. My utility function would assign an extreme negative value to allowing my assessment of the probability of something to be manipulated in that way.
↑ comment by Thomas · 2017-07-23T15:39:02.715Z · LW(p) · GW(p)
I am not afraid of being called a racist at all. And I am not even afraid of not being called a racist, either.
I am just saying that biological evolution is happening all the time, including now, and not only in some "ancestral past". Limiting evolution to the time before we went out of Africa is just silly. Being a human is a matter of degree. This goes for me, for you, for the nearest dead Neanderthal in a local museum, and for a still un-excavated one.
Now, if you only have some DNA, it's difficult to say how intelligent or strong a particular dog was. Or a particular human. We may have some ancient DNA, but we still can't interpret it fully.
Replies from: username2
↑ comment by username2 · 2017-07-24T07:39:45.234Z · LW(p) · GW(p)
What is the selection pressure now?
Replies from: Lumifer, Thomas
↑ comment by Lumifer · 2017-07-24T15:14:13.709Z · LW(p) · GW(p)
As Thomas said, many kinds of.
One kind is favouring believing in a very traditional variety of religion (Orthodox Judaism, conservative Catholicism) and not being an environmentalist :-P
Replies from: username2
↑ comment by Thomas · 2017-07-24T08:23:27.976Z · LW(p) · GW(p)
Many pressures, of course. How well you deal with diseases and accidents, how successful you are in spreading your genes around...
Many, many selection pressures, no doubt about that. The idea that there has been no pressure since the end of WWII, or since any other date, is just plain silly.
Still, that view is fairly mainstream. For polite society, evolution stopped long ago. But polite society is itself a kind of selection pressure, too.
comment by tukabel · 2017-07-24T15:06:21.843Z · LW(p) · GW(p)
and remember that DEATH is THE motor of Memetic Evolution... the old generation will never think differently, only the new one, whatever changes occur around it
Replies from: Manfred
↑ comment by Manfred · 2017-07-24T18:01:52.063Z · LW(p) · GW(p)
First, I checked out the polling data on interracial marriage. Every 10 years the approval rating has gone up by ~15 percentage points. I couldn't find a concise presentation of the age-segregated data from now vs. in the past, but 2007 and 1991 were available, and they look consistent with over 80% of the opinion change being due to old people dying off. This surprised me; I expected to see more evidence of people changing their minds.
Now look at gay marriage. It's gained at ~18 points per 10 years. This isn't too different from 15, so maybe this is people dying off too. And indeed that seems to be mostly the case - except in the last 10 years, where the gains don't follow the right age pattern, indicating that of the 18 points of gain, about 40% may actually involve people changing their minds.
comment by DanArmak · 2017-07-22T14:16:27.439Z · LW(p) · GW(p)
I feel the briefness of history is inseparable from its speed of change. Once the agricultural revolution got started, technology kept progressing and we got where we are today quite quickly - and that's despite several continental-scale collapses of civilization. So it's not very surprising that we are now contemplating various X-risks: to an external observer, humanity is a very brief phenomenon going back, and so it's likely to be brief going forward as well. Understanding this on an intuitive level helps when thinking about the Fermi paradox or the Doomsday Argument.