Posts

Bayesian Reasoning - Explained Like You're Five 2015-07-24T03:59:34.901Z
An example demonstrating how to deduce Bayes' Theorem 2015-07-24T03:58:40.955Z
The Just-Be-Reasonable Predicament 2015-07-16T03:17:52.926Z

Comments

Comment by Satoshi_Nakamoto on Bayesian Reasoning - Explained Like You're Five · 2015-07-25T02:58:32.723Z · LW · GW

Ok. Thanks for letting me know. I have removed the first example. I thought starting with an example that didn't look at evidence would make things simpler, but I think it is better without it.

If anyone wants to know the difference between frequency and probability, see the quote below:

“A probability is something that we assign, in order to represent a state of knowledge, or that we calculate from previously assigned probabilities according to the rules of probability theory. A frequency is a factual property of the real world that we measure or estimate. [...] The fundamental, inescapable distinction between probability and frequency lies in this relativity principle: probabilities change when we change our state of knowledge; frequencies do not. It follows that the probability p(E) that we assign to an event E can be equal to its frequency f (E) only for certain particular states of knowledge. Intuitively, one would expect this to be the case when the only information we have about E consists of its observed frequency.” Jaynes, E. (2003), Probability Theory: The Logic of Science, New York, Cambridge University Press, pg. 292
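To make Jaynes' distinction concrete, here is a minimal sketch of my own (the coin, the uniform prior, and the numbers are illustrative assumptions, not from the quote): the coin's frequency of heads is a fixed fact about the world, while the probability an agent assigns to "next flip is heads" changes as its state of knowledge changes.

```python
import random

random.seed(0)
TRUE_FREQUENCY = 0.6  # a factual property of the coin; it never changes

heads, flips = 0, 0
for n in [0, 10, 100, 1000]:
    while flips < n:
        heads += random.random() < TRUE_FREQUENCY
        flips += 1
    # Posterior mean under a uniform Beta(1, 1) prior over the coin's bias:
    # Beta(1 + heads, 1 + tails) has mean (1 + heads) / (2 + flips).
    p_heads = (1 + heads) / (2 + flips)
    print(f"after {flips:4d} flips: assigned P(heads) = {p_heads:.3f}; "
          f"frequency is still {TRUE_FREQUENCY}")
```

Before any flips the agent assigns P(heads) = 0.5; as evidence accumulates the assigned probability drifts toward 0.6, while the frequency itself was 0.6 all along.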

Comment by Satoshi_Nakamoto on An example demonstrating how to deduce Bayes' Theorem · 2015-07-24T11:51:25.577Z · LW · GW

Yes, you can. See this site for what I think is a good example of visualizing Bayes' theorem with Venn diagrams.
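For anyone who wants the deduction itself rather than the picture, it follows in a couple of lines from the standard definition of conditional probability (the textbook route, added here for reference):

```latex
\begin{align*}
  \text{Definition of conditional probability:}\quad
    & P(A \mid B) = \frac{P(A \cap B)}{P(B)},
      \qquad P(B \mid A) = \frac{P(A \cap B)}{P(A)} \\
  \text{Both give the same joint probability:}\quad
    & P(A \mid B)\,P(B) = P(A \cap B) = P(B \mid A)\,P(A) \\
  \text{Divide through by } P(B)\text{:}\quad
    & P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
\end{align*}
```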

Comment by Satoshi_Nakamoto on The Just-Be-Reasonable Predicament · 2015-07-17T03:30:38.971Z · LW · GW

Good point. Would you say that this is the problem: when you are rational, you deem your conclusions more valuable than those of non-rational people. This can end up being a problem, as you are less likely to update your beliefs when they are opposed. This adds the risk that if you form one false belief and then rationally deduce a plethora of others from it, you will be less likely to update any erroneous conclusions.

I think that the predicament highlights the fact that going against what is reasonable is not something that you should do lightly. Maybe I should make this more explicit.

If you are going against the crowd, then there is a good chance that you have made a mistake somewhere in your reasoning and that your conclusion is crazy or does not work. Reasonable things are not normally like this, because they need to be circulated and disseminated; if they were crazy or didn't work, this could not happen. But this doesn't mean that they are optimal or that they are right.

If you are going against what is reasonable, then this is a serious reason to doubt your beliefs. It is not a reason in and of itself to believe something untrue or irrational.

What do you think is the best way to overcome this problem? This is from the post:

How can you tell when you have removed one set of blind spots from your reasoning without removing its counterbalances? One heuristic to counter this loss of immunity, is to be very careful when you find yourself deviating from everyone around you. I deviate from those around me all the time, so I admit I haven't found this heuristic to be very helpful. Another heuristic is to listen to your feelings. If your conclusions seem repulsive to you, you may have stripped yourself of cognitive immunity to something dangerous.

I would add that it is a good idea to try to explain your beliefs to other people, preferably people you believe are rational, and the more of them the better. Try to seriously doubt your beliefs and to see them anew. If other people reach the same conclusion, then you can become more sure of your beliefs.

Comment by Satoshi_Nakamoto on The Just-Be-Reasonable Predicament · 2015-07-17T02:54:05.991Z · LW · GW

I agree that this is probably not the best example. The scrub one is better.

I think that "moral" is similar to "reasonable" in that it is based on intutition rather than argument and rationality. People have seen slavery as being "moral" in the past. Some of the reasons for this is false beliefs like that it's natural that some people are slaves, that slaves are inferior beings and that slavery is good for slaves,

I guess I was thinking about it from two points of view:

  • Is it rational to have the moral belief that there should be slaves? A rational person would look at all the supporting beliefs and see if they are themselves rational. For example, are slaves inferior beings? The answer, as we know, is no. Mass enslavement of large portions of a population has often been justified by some characteristic, like high levels of melanin in the case of slaves in America. These characteristics don't make people inferior, and they certainly don't make people inhuman.
  • With the system set up the way it was, was the alternative to slavery inferior? I am not an expert on this, but I was thinking that the alternative was not inferior. Perhaps it would have been slower in terms of growth, but America could still have thrived as a nation if the South had abolished slavery without war.
Comment by Satoshi_Nakamoto on The Just-Be-Reasonable Predicament · 2015-07-16T12:35:23.427Z · LW · GW

I agree that rationality and reasonableness can be similar, but they can also be different. See this post for what I mean by rationality. The idea of rationality as simply choosing the best option is too vague.

Some factors that may lead to what others think is reasonable differing from what is most rational are: the continued use of old paradigms that are known to be faulty, pushing your own views as what is reasonable as a method of control, and status quo bias.

Here are two more examples of the predicament:

  • Imagine that you are in a family that is heavily religious and you decide that you are an atheist. If you tell anyone in your family, you are likely to get chastised for it, making this an example of the just-be-reasonable predicament.
  • Imagine that you are a jury member and you are the cause of a hung jury. They tell you: “the guy obviously did it. He is a bad man anyway. How much evidence do you need? Just be reasonable about this so that we can go home”. Now, you may actually be irrationally underconfident, or perhaps you are not. The post was about what you should do in this situation. I consider it a predicament because people find it hard to do what they think is the right thing when they are uncertain and when it will cause them social disapproval.

Also, I have updated the below:

The just-be-reasonable predicament occurs when in order to be seen as being reasonable you must do something irrational or non-optimal.

To this, to try to express what I meant more clearly:

The just-be-reasonable predicament occurs when you are chastised for doing something that you believe to be more rational and/or optimal than the norm or than what is expected or desired. The chastiser has either not considered, cannot fathom, or does not care that what you are doing or want to do might be more rational and/or optimal than the default course of action. The predicament is similar to the one described in Lonely Dissent, in that you must choose between taking what you believe to be the most rational and/or optimal course of action and taking the one that will meet with the least social disapproval.

Comment by Satoshi_Nakamoto on The Just-Be-Reasonable Predicament · 2015-07-16T12:18:52.934Z · LW · GW

I don't think I was very clear. I meant for this case to be covered under "avoid the issue", since by avoiding the issue you just continue whatever course of action or behaviour you were previously undertaking. I have edited the post to make this a bit clearer.

I thought about this later and think you were right. I have updated the process in the picture.

Comment by Satoshi_Nakamoto on The Just-Be-Reasonable Predicament · 2015-07-16T12:15:12.335Z · LW · GW

Yes, they seem pretty close to me. I think it is a bit different, though. I think the Bruce article was trying to convey the idea that Bruce was a kind of gaming masochist. That is, he wanted to lose.

An example quote is:

If he would hit a lucky streak and pile up some winnings he would continue to play until the odds kicked in as he knew they always would thus he was able to jump into the pit of despair and self-loathing head first. Because he needed to. And Bruce is just like that.

The difference, as I see it, is that Bruce loses through self-sabotage because of unresolved issues in his psyche, while the scrub loses through self-sabotage because they are too pedantic.

Comment by Satoshi_Nakamoto on The Just-Be-Reasonable Predicament · 2015-07-16T12:14:04.062Z · LW · GW

Good idea. I replaced it with "Why can't you just conform to my belief of what is the best course of action for you here". Thanks.

Comment by Satoshi_Nakamoto on The Just-Be-Reasonable Predicament · 2015-07-16T12:13:10.628Z · LW · GW

Done. Thanks for the suggestion.

Comment by Satoshi_Nakamoto on Rational vs Reasonable · 2015-07-16T03:25:21.284Z · LW · GW

I wrote a post based on this; see The Just-Be-Reasonable Predicament. The just-be-reasonable predicament occurs when, in order to be seen as being reasonable, you must do something irrational or non-optimal.

Comment by Satoshi_Nakamoto on Rational vs Reasonable · 2015-07-13T10:27:00.282Z · LW · GW

Is this a decent summary of what you mean by 'reasonable': noticeably rational in socially acceptable ways, i.e. you use reasons and arguments that are in accordance with group norms?

A reasonable person:

  • can explain their reasoning
  • is seen as someone who will update their beliefs based on socially acceptable evidence
  • is seen to act in accordance with social norms even when the norms are irrational. This means that their behaviour and reasoning are seen as socially acceptable and/or praiseworthy
Comment by Satoshi_Nakamoto on Roadmap: Plan of Action to Prevent Human Extinction Risks · 2015-06-14T07:35:33.902Z · LW · GW

Don’t worry about the money; just like the comments if they are useful. In "Technological precognition", does this cover time travel in both directions, i.e. looking into the future and taking actions to change it, and also sending messages into the past? Also, what about making people more compliant and less aggressive, by either dulling or eliminating emotions in humans or by making people more like a hive mind?

Comment by Satoshi_Nakamoto on Roadmap: Plan of Action to Prevent Human Extinction Risks · 2015-06-14T07:34:40.510Z · LW · GW

Bitcoin is an electronic payment system based on cryptographic proof instead of trust. I think the big difference between it and the risk control system is the need for enforcement, i.e. changing what other people can and can’t do. There seem to be two components to the risk control system: prediction of what should be researched, and enforcement of this. The prediction component doesn’t need to come from a centralised power; it could just come from the scientific community. I would think that the enforcement would need to come from a centralised power. I guess that there does need to be a way to stop the centralised power itself causing X-risks. Perhaps this could come from a localised and distributed effort; maybe something like a better version of Anonymous.

Comment by Satoshi_Nakamoto on Roadmap: Plan of Action to Prevent Human Extinction Risks · 2015-06-13T05:18:02.866Z · LW · GW

In plans: 1. Is not "voluntary or forced devolution" the same as "ludism" and "relinquishment of dangerous science" which is already in the plan?

I was thinking more along the lines of restricting the chance for divergence in the human species. I guess I am not really sure what it is that you are trying to preserve. What do you take to be humanness? Technological advances may allow us to alter ourselves so substantially that we become post-human or no longer human, for example through cybernetics or genetic engineering. "Ludism" and "relinquishment of dangerous science" are ways to restrict which technologies we use, but note that we would still be capable of using and creating these technologies. Devolution (perhaps there is a better word for it) would be something like the dumbing down of all or most humans so that they are no longer capable of using or creating the technologies that could make them less purely human.

I think that "some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware" is basically the same idea as "smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war)", but your wording is excellent.

Yes, you are right. I guess I was implying man-made catastrophes created in order to cause a paradigmatic change, rather than natural ones.

I still don't know how we could fix all the world system problems which are listed in your link without having control of most of the world which returns us to plan A1.

I'm not sure either. I would think you could do it by changing the way that politics works so that the policies implemented actually have empirical backing, based on what we know about systems. Perhaps this is just AI and improved computational modelling. This idea of needing control of the world seems extremely dangerous to me, although I suppose a top-down approach could solve the problems. I think that you should also think about what a good bottom-up approach would be. How do we make local communities and societies more resilient, economical, and capable of facing potential X-risks?

In survive the catastrophe I would add two extra boxes:

  • Limit the impact of a catastrophe by implementing measures to slow its growth and reduce the area it affects. For example, with pandemics you could improve the capacity for rapid production of vaccines in response to emerging threats, or create or grow stockpiles of important medical countermeasures.

  • Increase the time available for preparation by improving monitoring and early detection technologies. For example, with pandemics you could support general research on the magnitude of biosecurity risks and opportunities to reduce them, and improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly.

I could send money to a charity of your choice.

Send it to one of the charities here.

Comment by Satoshi_Nakamoto on Roadmap: Plan of Action to Prevent Human Extinction Risks · 2015-06-12T14:39:09.480Z · LW · GW

I would use the word resilient rather than robust.

  • Robust: A system is robust when it can continue functioning in the presence of internal and external challenges without fundamental changes to the original system.

  • Resilient: A system is resilient when it can adapt to internal and external challenges by changing its method of operations while continuing to function. While elements of the original system are present, there is a fundamental shift in core activities that reflects adaptation to the new environment.

I think that it is a better idea to think about this from a systems perspective rather than from the specific X-risks or plans that we know about or think are cool. We want to avoid the availability bias: I would assume that there are more X-risks and plans that we are unaware of than ones we are aware of.

I recommend adding in the risks and relating them to the plans, as most of your plans, if they fail, would lead to other risks. I would do this in a generic way. An example to demonstrate what I am talking about: take the risk of a tragedy of the commons and a plan to create a more capable type of intelligent life form that will uphold, improve, and maintain the interests of humanity. This could be done by using genetic engineering and AI to create new life forms, and nanotechnology and biotechnology to change existing humans. The potential risk of this plan is that it leads to the creation of other intelligent species that will inevitably compete with humans.

One more recommendation is to remove the timeline from the road map and just have the risks and plans; the timeline would be useful in the explanation text you are creating. I like this categorisation of X-risks:

  • Bangs (extinction) – Earth-originating intelligent life goes extinct in a relatively sudden disaster resulting from either an accident or a deliberate act of destruction.

  • Crunches (permanent stagnation) – The potential of humankind to develop into posthumanity is permanently thwarted although human life continues in some form.

  • Shrieks (flawed realization) – Some form of posthumanity is attained but it is an extremely narrow band of what is possible and desirable.

  • Whimpers (subsequent ruination) – A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.

I don’t want this post to be too long, so I have just listed the common systems problems below:

  • Policy Resistance – Fixes that Fail

  • Tragedy of the Commons

  • Drift to Low Performance

  • Escalation

  • Success to the Successful

  • Shifting the Burden to the Intervenor—Addiction

  • Rule Beating

  • Seeking the Wrong Goal

  • Limits to Growth

Four additional plans are:

  1. (in Controlled regression) voluntary or forced devolution

  2. uploading human consciousness into a super computer

  3. some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware

  4. dramatic societal changes to avoid some existential risks, like the overuse of resources. An example of this is in the book The World Inside.

You talk about being saved by non-human intelligence, but it is also possible that SETI could actually cause hostile aliens to find us. A potential plan might be to stop SETI and try to hide, although the opposite plan (seeking out aliens) seems just as plausible.