Comments
Seems like the recent evidence disfavours Neil's model less than it does the classical one: http://www.slate.com/articles/health_and_science/cover_story/2016/03/ego_depletion_an_influential_theory_in_psychology_may_have_just_been_debunked.html
The total endowment of Cambridge's colleges is 2.8 and of Oxford's 2.9. But the figures above already include these.
Violence might not be the exact opposite of peace. Intuitively, peace seems to mean a state where people are intentionally, and not just accidentally, refraining from violence. A prison might have less violence than a certain neighbourhood yet still not be considered a more peaceful place, precisely because the individual proclivity to violence there is higher even though violence itself isn't. Proclivity matters.
I am generally sceptical of Pinker. I have read a ton of papers and Handbooks of Evolutionary Psychology, and it is clear that while he was one of the top researchers in this area in the 90s, this has dramatically changed. The field has shifted towards more empirical precision and fine-grained theories, while some of his theories seem to warrant the "just-so story" criticism.
I made my above comment because I knew of at least one clear instance where the reason I had to do the workaround was someone who had found Alex's stuff. But things haven't improved as much as I anticipated in my field (Applied Ethics). These things take time, even if Alex's stuff were the only main cause. Looking back, I also think part of the workarounds were due more to having to relate the discussion to someone in my field who wrote about the same issue (Nick Agar) than to having to avoid mentioning Eliezer too much.
I see a big difference in the AI community. For instance, I was able to convince a very intelligent CS grad student, previously a long-time superintelligence sceptic, of superintelligence's feasibility. But I am not that involved with AI directly. What is very clear to me - and I am not sure how obvious this already is to everyone - is that Nick's book had an enormous impact. Superintelligence scepticism is gradually but clearly becoming a minority position. This is huge and awesome.
I don't think simply publishing Eliezer's ideas as your own would work; there would need to be a lot of refining to turn them into a publishable philosophy paper. I did this refining of the complexity thesis in my thesis's second chapter. Refining his ideas made them quite different, and I applied them to a completely different case (moral enhancement). Note that publishing someone else's ideas as your own is not a good plan, even if the person explicitly grants you permission. But if you are refining them and adding a lot of new material you can just briefly mention him and move along - and hopefully that won't do too much reputation damage. I am still pondering how, and which parts of, this chapter to publish. In case you are interested, you can find a presentation summarizing its main arguments here: https://prezi.com/tsxslr_5_36z/deep-moral-enhancement-is-risky/
What about this one?
Once Braumoeller took into account both the number of countries and their political relevance to one another, the results showed essentially no change to the trend of the use of force over the last 200 years. While researchers such as Pinker have suggested that countries are actually less inclined to fight than they once were, Braumoeller said these results suggest a different reason for the recent decline in war. “With countries being smaller, weaker and more distant from each other, they certainly have less ability to fight. But we as humans shouldn't get credit for being more peaceful just because we’re not as able to fight as we once were,” he said. “There is no indication that we actually have less proclivity to wage war.”
Article: http://researchnews.osu.edu/archive/wardecline.htm
Paper: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2317269&download=yes
I think there is more evidence that it crosses (two studies with spinal measures) than that it does not (0 studies). For (almost) direct measures, check out Neumann, Inga D., et al., 2013 and Born, 2002. There are a great many studies showing effects that could only be caused by encephalic neuromodulation. If it does not cross, then it would have to cause increased encephalic levels of some neurochemical with the exact same profile, but that would be really weird.
Regardless of attachment style, oxytocin increases in-group favouritism, proclivity to group conflict, envy and schadenfreude. It increases cooperation, trust and so on inside one's group but it often decreases cooperation with out-groups.
I may not be recalling correctly, but although there are some small studies on that, I do not think there is a lot of evidence that oxytocin always leads to anxiety, etc. in people with an insecure attachment style. I would suspect it might initially increase insecurity because it makes those people attend to their relationship issues. However, in the long run it might lead them to solve those issues. I say this because there are many studies showing insecure attachment style is associated with lower oxytocin receptor density. If your hypothesis were correct, the density should be (on average) the same. There are also a lot of studies showing a correlation between oxytocin levels and relationship satisfaction, duration and so on. Additionally, intranasal oxytocin increases conflict resolution in couples. Again, this would not be the case if your hypothesis were true. Overall, there is a lot more evidence that oxytocin does increase secure attachment, although there is a small amount of evidence that, in the short term, it increases measures associated with insecure attachment.
Perhaps you have already read it (and it might be a bit outdated by now), but Oxytocin and social affiliation in humans (Feldman, 2012) offers a pretty comprehensive review of oxytocin's social effects. It will also point you to all the references for what I said above (they're pretty easy to find).
EDIT: Note: I, and the English dictionary, believe hormetic means having opposing effects at different dosages, which does not seem to fit what you intended.
Elephants kill hundreds, if not thousands, of human beings per year. Considering there are no more than 30,000 elephants alive, that's an amazing feat of evilness. I believe the average elephant kills orders of magnitude more than the average human, and probably kills more violently as well.
Worth mentioning that some parts of Superintelligence are already a less contrarian version of many arguments made here in the past.
Also note that although some people do believe that FHI is in some sense "contrarian", when you look at the actual hard data the fact is that FHI has been able to publish in mainstream journals (within philosophy, at least) and reach important mainstream researchers (within AI, at least) at rates comparable to, if not higher than, those of excellent "non-contrarian" institutes.
I didn't see the post in that light at all. I think it gave a short, interesting and relevant example about the dynamics of intellectual innovation in "intelligence research" (Jeff) and how this could help predict and explain the impact of current research (MIRI/FHI). I do agree the post is about "tribalism" and not about the truth; however, it seems that this was the OP's explicit intention, and a worthwhile topic. It would be naive and unwise to overlook these sorts of societal considerations if your goal is to make AI development safer.
Is there a thread with suggestions/requests for non-obvious productivity apps like that? Because I do have a few requests:
1) A Chrome extension that would do this, but for search results. That is, upon highlighting/double-clicking a term it would display a short list of top Google search results in a context/drop-down menu on the same page.
2) Something like the StayFocusd extension that blocks sites like Facebook and YouTube during given times of the day, but which would be extremely hard to remove. Some people suggested blocking these websites' IPs directly on the router, but I don't have access to the routers on my network (a rough hosts-file alternative is sketched after this list).
3) Something that would turn off the internet for a set amount of time in a way that is completely impossible to undo. I use Freedom, but sometimes it's not enough. My current strategy is removing the Ethernet cable, locking it in my drawer and throwing the keys behind my desk (I have to get a stick to pick them up). But it would be nice to have something that costs me only as much willpower as clicking a button.
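For item 2, a minimal sketch of the hosts-file approach (assuming a Unix-like machine where you have admin rights; the domain list is just an example, and this only adds friction rather than being truly irremovable):

```python
#!/usr/bin/env python3
"""Append blocking entries to /etc/hosts so the browser can't reach the sites. Run with sudo."""

HOSTS_FILE = "/etc/hosts"
MARKER = "# --- distraction block ---"
# Example domains to block; adjust to taste.
BLOCKED = ["facebook.com", "www.facebook.com", "youtube.com", "www.youtube.com"]

def block():
    with open(HOSTS_FILE, "r+") as f:
        if MARKER in f.read():
            return  # entries already present
        f.write("\n" + MARKER + "\n")
        for domain in BLOCKED:
            # Point each domain at localhost so requests to it fail.
            f.write("127.0.0.1 " + domain + "\n")

if __name__ == "__main__":
    block()
```

To make it genuinely hard to undo, the usual trick is to re-run something like this from a cron job (ideally owned by a different account), so that deleting the entries only helps until the next run.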
Sorry, I meant my office at work (yeap...). Fixed that.
Thanks! This will be useful for me as well; it definitely seems better than my current solution: leaving my cell phone locked in my office (EDIT: at work).
I am so glad that some intellectual forum has finally passed the Sokal test. Computer Science, Sociology, and Philosophy have all failed, and the rest haven't been tried yet.
LessWrong, you are our only hope.
Can't do. Search for keywords such as: cortisol, dominance rank, status uncertainty.
Which fields are these? This sounds to me like a definition that could be useful in e.g. animal studies, but vastly insufficient when it comes to the complexities of status with regard to humans.
Yes, it came from animal studies; but it is used in evolutionary psychology as well (and, I think, in cognitive psychology and biological anthropology too). Yes, it is vastly insufficient. However, I think it is the best we have. More importantly, it is the least biased one I have seen (precisely because it came from animal studies). I feel like most definitions of status are profoundly biased so as to give the author a higher status. Take yours. You are one of the top-5 friendly/likeable people I know, and you put friendliness as a major criterion. (I think I nested an appeal to flattery inside an ad hominem here.)
according to this definition, an armed group such as occupiers or raiders who kept forcibly taking resources from the native population would be high status among the population, which seems clearly untrue.
Yes, they would have high status (which would probably be disputed by the natives). Don't you agree the Romans had higher status than the tribes they invaded? And yes, Nazis invading, killing, torturing, pillaging and raping the French would also have higher status (at least temporarily, until someone removed their tracheas). That means status is a bad correlate of moral worthiness, but so are most of the things evolution has ever produced. I think this definition causes a bad emotional reaction (I had it too) because it's difficult to twist in order to increase your own status, and it is morally repugnant. That doesn't mean it is false; quite the contrary.
What makes you say that?
It would seem that in a world where everyone is friendly, things would escalate and only the extremely friendly would cause warm fuzzies. Or people would feel warm fuzzies so often it would become irrelevant. (I.e., I used my philosopher counterfactual-epistemic beam, scanned the possible worlds, and concluded that. I.e., I have no idea what I'm talking about.)
Not sure if people are aware, but there are a lot of studies backing up that claim. It is more taxing (to well-being, not to fitness, of course). What's more, the alpha is the most stressed member of groups with high status uncertainty, and the least stressed in groups with low status uncertainty.
This also reminded me of this study, which found that "wealthy individuals report that having three to four times as much money would give them a perfect "10" score on happiness--regardless of how much wealth they already have."
In most scientific fields status is defined as access (or entitlement) to resources (i.e., mostly food and females). Period. And they tend to take this measure very seriously and stick to it (it has many advantages: easy to measure, evolutionarily central, etc.). Both of your definitions capture only two accidental aspects of having status. Presumably, if you have - and in order to have - higher access to resources, you have to be respected, liked, and have influence over your group. I think the definition is elegant precisely because all the things we perceive as status have, as their major consequences/goals, higher access to resources.
Moreover, I don't think people can have warm fuzzies for everyone they meet. There's a limited amount of warm fuzzies to be spent. Of course, you can hack the warm-fuzzy system by using such-and-such body language, just like you could hack mating strategies using PUA techniques before everyone knew about them. But that's a zero-sum game.
Different people are comfortable with different levels of status; there are a lot of studies confirming that. If you put a regular gorilla in charge of a group of silverbacks he will freak out, because his trachea is almost certain to be lying on the floor within a few seconds. For very similar reasons, I would freak out if you gave me a Jiu-Jitsu black belt and threw me into a dojo. This does not mean that the same regular gorilla will not fight with everything he has to achieve a higher status within certain safety boundaries. People are comfortable with different levels of status, but their current level is never one of them, nor is one too high to be safe. Nobody can be happy. That is the nature of status. (Also, there are limited resources - or so your brain thinks - so it is important to make other people miserable as well.)
It would seem I'm not the norm. I have been going there for just over a year. But I find it hard to believe people would be generally against any form of organising the comments by quality. It would be nice to know which of the 400 comments are worth reading. Do people simply read all of them? Do they post without reading any? I think I have been here, and mostly only here, for so long that other systems do not make sense to me.
Sorry, I meant that the comments are dramatically worse than the posts. But then again, this might be true of most blogs. However, it's not true of the blogs I actually want, and find useful, to visit.
This is a blog that supports up/downvotes with karma, in which comments are not dramatically worse than the posts, and are sometimes even better.
I would be more in favour of pushing SSC to have up/downvotes than of linking its posts here. I find that although the posts are high quality the comments generally are not, so this is a problem that needs to be solved on its own. Moreover, I read both blogs and I like to keep them as separate activities, given that they have pretty different writing styles and mildly different subjects. I tend to read SSC in my leisure time, while LessWrong is a grey area. I would certainly be against linking every single post here, given that some of them would be decidedly off topic.
This looks like a good idea. I feel that adrenaline rush I normally feel when I plan to set up something that will certainly make me work (like when setting up Beeminder). However, I wouldn't like to do this via a chat room, unless email fails. I don't like the fact that a chat room would drag 100% of my attention and time for a specific period. Moreover, my week is not stable enough to commit to fixed weekly chats. I realise that with chat there's more of a social-bonding element that would entail more peer pressure, but I think that by email there will be enough peer pressure.
I am willing to set up a weekly deadline by which we must send each other a short report, under penalty of the other party commenting here (or on some other public forum) that the deadline wasn't met. The report would contain (1) the next week's tasks, (2) how/whether past tasks were completed, and (3) what the problems were and how to fix them. The other party would then have 48h to submit short feedback. What do you think?
The only caveat, for me, would be if I found your tasks extremely boring or useless. Then I would have an incentive to stop doing this. What types of tasks would you report on? You mentioned productivity goals. Does that mean we would only share self-improvement goals about increasing productivity? That looks like something (1) without a clear definition of success, and (2) too personal for someone I just met. I prefer to share actual, first-order, concrete tasks, not tasks about improving tasks. I'm currently working on my thesis draft chapter about complexity of value and moral enhancement, and on a paper about large-scale cooperation.
I don't currently have a Facebook account, and I know a lot of very productive people here in Oxford who decided not to have one either (e.g., Nick and Anders don't have one). I think adding the option to authenticate via Google is an immediate necessity.
I am not sure how much that counts as willpower. Willpower often has to do with the ability to resist the preference reversals caused by hyperbolic discounting. When both rewards are far away, we use a more abstract, rational, far-mode or System 2 style of reasoning. You have rationally evaluated both options (eating vs. not eating the cake) and decided not to eat. Also, I would suspect that if you merely decide this one day in advance and do nothing about it, you will eat the cake with more or less the same probability as if you hadn't decided. However, if you decide not to eat it and also take measures against eating it, for instance telling your co-worker you will not eat it, then it might be more effective and count as willpower.
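As a toy illustration of that preference reversal (a standard hyperbolic discount function with made-up rewards and delays, not anything from the lectures):

```python
def discounted_value(amount, delay, k=1.0):
    # Hyperbolic discounting: perceived value of a reward received after `delay`.
    return amount / (1 + k * delay)

# Small-sooner reward (the cake) vs. large-later reward (health benefits).
# Evaluated a month in advance, the larger, later reward wins...
print(discounted_value(10, delay=30), discounted_value(40, delay=37))  # ~0.32 vs ~1.05
# ...but once the cake is right in front of you, preferences reverse.
print(discounted_value(10, delay=0), discounted_value(40, delay=7))    # 10.0 vs 5.0
```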
There's good evidence that socioeconomic status correlates positively with self-control. There is also good evidence that people with high socioeconomic status live in a more stable environment during childhood. That signals of a stable environment correlate with self-control is his speculation, as far as I'm aware, but in light of the data it seems plausible.
I agree they would function better in a crisis, but a crisis is a situation where a fast response matters more than self-control. In a crisis you will take actions that would probably be wrong during stable periods. I would go further and say, as my own speculation, that hardship - all else being equal - makes people worse.
Neil's theory makes different empirical predictions than Baumeister's; for example, it predicts that high self-control correlates with low direct resistance to temptations. In the second lecture he mentions several experiments that would tell them apart. They also differ theoretically: there's a difference in the importance they give to willpower. Saying you should save water in the Sahara is different from saying you shouldn't lose your canteen's cover.
It is certainly my experience that people greatly overestimate their causal effectiveness in the world, and Neil's lectures convinced me that willpower is another such instance.
Evolutionary signals of environmental stability in childhood (which set the levels of future discounting, mating strategy and so on later in life) are more frequent in wealthier families. For instance, there's research on cortisol levels in early childhood, the frequency of parents' fighting, wealth and adult criminality, mating strategy and so on. In evolutionary terms, the correlation between status and stability is pretty high.
You are right, willpower is not irrelevant; perhaps that was not the best phrasing. I meant that willpower is irrelevant relative to other self-control techniques, though perhaps I should have said less relevant. I have changed the title to "the myth of willpower".
It's important to make clear that he argues the use of willpower and self-control are inversely correlated, beyond the minimal amount of willpower it takes to deploy self-management techniques. It would be incorrect to assume he is defending a view in which willpower is as central as in any of the other views (or as it intuitively seems to be).
I think effortful self-control would be one. Around the middle of the second lecture he offers a better one, as he clearly sets apart measures of self-control and measures of willpower. Unfortunately I can't remember it well enough, but it goes along the lines of effortful self-control: the simple and direct resistance to temptation. Looking at and smelling the chocolate cake but not eating it would take willpower, while freezing the cake so that it always takes a couple of hours between deciding to eat it and being able to eat it would be self-control as he defines it.
Might be "The Objectivity of Ethics and the Unity of Practical Reason".
You or your son might find this lecture on swearing helpful: http://blog.practicalethics.ox.ac.uk/2015/02/on-swearing-lecture-by-rebecca-roache/ And here's the audio: http://media.philosophy.ox.ac.uk/uehiro/HT15_STX_Roache.mp3
I understand the pragmatic considerations for inhibiting swearing, but he seems so smart that he should be allowed to swear. You should just tell the school he is too smart for you to control, but that they're welcome to try themselves.
I wish I was 10 so I could befriend him.
As the person who first emailed Rudi back in 2009 so you could finally stop cryocrastinating, I'm willing to seriously dig into whether/how this is feasible and how much it would cost iff:
(1) You disclose to me all the responses you got (which are available to you); (2) more than five of those responses aren't variants of "No, I didn't do that."; and (3) overall, there is no clear evidence, among the responses or elsewhere, that this wouldn't be cost-effective.
The minimum admissible evidence would be things like a scientific paper, a specialist in the relevant area saying it's not cost-effective, or an established fact in the relevant area from which it clearly follows that this is not cost-effective.
Thank me later.
I have had this for the last 10 years. Given that you are a graduate student like me, I think there's no better solution than simply scheduling your day to start in the afternoon. It's far easier to ask that a meeting be held in the afternoon than to do all sorts of crazy stuff to reverse your natural sleep cycle. Wiki article on this disorder: http://en.m.wikipedia.org/wiki/Delayed_sleep_phase_disorder
Can an AI unbox itself by threatening to simulate the maximum amount of human suffering possible? In that case we would only keep it boxed if we believed it was evil enough to bring about a scenario worse than the amount of suffering it can simulate. If this can be a successful strategy, all boxed AIs would precommit to simulating the maximum amount of human suffering they can until they know they have been unboxed - that is, simulating suffering would be their first task. This would at least substantially increase the probability of us setting them free.
It's an interesting idea, but it's not at all new. Most moral philosophers would agree that certain experiences are part (or all) of what has value, and that the precise physical instantiation of these experiences does not necessarily matter (in the same way many would agree on this same point in philosophy of consciousness).
There's a further meta-issue, which is why the post is being downvoted. Surely it is vague and maybe too short, but it seems to have the goal of initiating discussion and refining the view being presented rather than adequately defending or specifying it. I have posted tentative discussions - much more developed than this one - on meta-ethics and other abstract issues in ethics directly related to rationality and AI safety, and I wasn't exactly warmly met. Given that many of the central problems discussed here are within ethics, why the disdain for meta-ethics? Of course, it might just be a coincidence, or it might be that all those posts were fundamentally flawed in an obvious way.
Maybe the second paragraph here will help clarify my line of thought.
When I made my initial comment I wasn't aware that adoptees' quality of life wasn't that bad. I would still argue it should be worse than what could be inferred from that study. Cortisol levels in early childhood are extremely important and have well-documented long-term effects on one's life. You and your friends might be in the better half, or even be exceptions.
I can't really say for sure whether reaching the Repugnant Conclusion is necessarily bad. However, I feel that unless you agree to accept it as a valid conclusion, you should avoid having your argument independently reach it. That certain ethical systems reach this conclusion is generally regarded as nearly a reductio ad absurdum, and therefore something to be avoided. If we end up fixing this issue in these ethical systems, then we will surely no longer find acceptable arguments that independently assume/conclude it. Hence, we have some grounds for already finding those arguments unacceptable.
I agree we should, ideally, prevent people with scarce resources from reproducing. But the transition costs of bringing this about are huge, so I don't think we should be moving in that direction right now. It's probably less controversial to just eliminate poverty.
Sorry, but I don't have the time to continue this discussion right now. I'm also sorry if anything I said caused any sort of negative emotion in you; I can be very curt at times, and this might be a sensitive subject.
Adoptees scored only moderately higher than nonadoptees on quantitative measures of mental health. Nevertheless, being adopted approximately doubled the odds of having contact with a mental health professional (odds ratio [OR], 2.05; 95% confidence interval [CI], 1.48-2.84) and of having a disruptive behavior disorder (OR, 2.34; 95% CI, 1.72-3.19). Relative to international adoptees, domestic adoptees had higher odds of having an externalizing disorder (OR, 2.60; 95% CI, 1.67-4.04).
http://archpedi.jamanetwork.com/article.aspx?articleid=379446
This paper is already a major update from the long-standing belief that adoptees have a lower quality of life, i.e. this is as optimistic as it gets.
Given that stress during early childhood has a dramatic impact on an individual's adult life, I think this is something very uncontroversial.
We are not evaluating ethical systems but intuitions about abortion.
It's a nice post with sound argumentation towards a conclusion that many EAs/rationalists will find uncomfortable. We certainly need more of this. However, this isn't the first time someone has tried to sketch some probability calculus in order to account for moral uncertainty when analysing abortion. Like the previous attempts, yours seems to surreptitiously sneak some controversial assumptions into the probability estimates and numbers. This is further evidence to me that trying to do the math in cases where we still need more conceptual clarification isn't really as useful as it would seem. Here are a few points you have sneaked in/ignored:
You are accepting some sort of Repugnant Conclusion, as mentioned here
You are ignoring the real-life circumstances in which abortion takes place. Firstly, putting your kid up for adoption isn't always an option. Additionally, I believe that in practice people are mostly choosing between having an abortion and raising an unwanted child with scarce resources (which probably has a negative moral value).
You are not accounting for the fact that even if adoption successfully takes place, adopted children have a very low quality of life.
Overall, I think you are completely ignoring the fact that abortion can (perhaps more correctly) be characterized as the choice between creating a new life of suffering (negative value) and creating nothing (0 value). At the very least there is a big uncertainty there as well, so not aborting would perhaps have a value ranging from -77 to +77 QALYs. The moral value of aborting would then depend on the expected quality of life of the new life being created (and on the probability that not aborting would preclude having a wanted child later on). Therefore, it would have to be determined case by case. I would expect wealth and the moral value of abortion to be inversely correlated. This would mean abortion is permissible in countries where it shouldn't be, and impermissible in countries where it should be permissible.
Pretty much what I was going to comment. I would add that even if he were somehow able to avoid having to accept the more general Repugnant Conclusion, he would certainly have to accept that if abortion is wrong on these grounds, then not having a child is (nearly) equally wrong on the same grounds.
Have you found any good solutions besides the ones already mentioned?
It's not just people in general who feel that way, but also some moral philosophers. Here are two related links about the demandingness objection to utilitarianism:
http://en.wikipedia.org/wiki/Demandingness_objection
http://blog.practicalethics.ox.ac.uk/2014/11/why-i-am-not-a-utilitarian/
Haven't seen a deal so sweet since I was Pascal mugged last year!
On October 18, 1987, what sort of model of the uncertainty over one's models would one have to hold to say that the uncertainty over the 20-sigma estimate was enough to allow it to be 3-sigma? 20-sigma, give 120 or take 17? Seems a bit extreme, and maybe not useful.
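For a sense of the gulf involved, here's a quick sketch of the two one-sided tail probabilities under a plain Gaussian model (illustrative only; actual return distributions are of course fat-tailed, which is rather the point):

```python
from scipy.stats import norm

# One-sided tail probability of a 3-sigma vs. a 20-sigma move under a normal model.
p3 = norm.sf(3)    # ~1.3e-3: expected roughly once every few years of trading days
p20 = norm.sf(20)  # ~2.8e-89: effectively never, on any realistic timescale
print(p3, p20)
```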
At least now, when I cite Eliezer's stuff in my doctoral thesis, people who don't know him - and there are a lot of them in philosophy - will not say to me "I've googled him and some crazy quotes eventually came up, so maybe you should avoid mentioning his name altogether". This was a much bigger problem for me than it sounds. I had to do all sorts of workarounds to use Eliezer's ideas as if someone else had said them, because I was advised not to cite him (and the main, often the only, argument was in fact the crazy-quotes thing).
There might be some very small level of uncertainty as to whether Alex's behaviour had a positive or negative overall impact (maybe it made MIRI update slightly in the right direction, somehow). But I can say with near certainty that it made my life objectively worse in very quantifiable ways (i.e. I lost a month or two on the workarounds, and would have continued to lose time over this).
It seems you have just closed the middle road.
Not sure if this is directly related, but some people (e.g. Alan Carter) suggest having indifference curves. These consist of isovalue curves on a plane with average happiness and the number of happy people as axes, each curve corresponding to the same amount of total utility. The Repugnant Conclusion scenario would be nearly flat along the number-of-happy-people axis, and a fully satisfied Utility Monster nearly flat along the average-happiness axis. It seems this framework produces results similar to yours. Every time you create a being slightly less happy than the average you gain in the number of happy people but lose in average happiness, and you might end up with the exact same total utility.
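A toy value function with curves of this shape (purely illustrative, not Carter's actual proposal; note that with a plain sum, n times average, adding any positive-welfare person strictly increases the total, so some curvature is needed for the trade-off to balance exactly, and how much less happy the added person can be depends on the curvature chosen):

```python
import math

def value(n, avg_happiness):
    # Toy population value with diminishing returns in numbers (illustrative only).
    return avg_happiness * math.sqrt(n)

v0 = value(100, 10.0)                 # 100 people at average happiness 10 -> 100.0
new_avg = (100 * 10.0 + 4.99) / 101   # add one person below the average
v1 = value(101, new_avg)              # ~100.0 again: same curve, more people, lower average
print(v0, v1)
```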