Also, two of your recommendations are:
Our top political figures can make powerful, courageous and politically unpopular statements that all Muslims are not to blame for this attack, and that we should not radicalize the rest through unthoughtful policies. We can reach out to Muslim leaders who condemned the Brussels attacks and work together against the radicals.
Of course, this is what western leaders have been doing for the past 15 years, and it doesn't seem to be working. Turns out Muslims are more inclined to get their theology from their own imams than from western politicians, and reaching out to "moderate" Muslim leaders results in Muslim leaders who are moderate in English but radical in Arabic.
Original thread here.
So you wrote an article that starts with a false premise, namely the implicit claim that the primary cause of radicalization is western police presence. It then proceeds to use numbers you appear to have taken from thin air in an argument whose only purpose appears to be signalling "rationality" and diverting attention from said false premise. It finally reaches a conclusion that's almost certainly false. This is supposed to promote rationality how?
Original thread here.
I believe the problem people have with this is that it isn't actually helpful at all. It's just a list of outgroups for people to laugh at, without any sort of analysis of why they believe this or what can be done to avoid falling into the same traps. Obviously a simple chart can't really encompass that level of explanation, so its actual value and meaningful content are limited.
Thinking about it some more, I think it could. The problem with the chart is that the categories are based on which outgroup the belief comes from. For a more rational version of the diagram, one could start by sorting the beliefs based on the type and strength of the evidence that convinced one the belief was "absurd".
Thus, one could have categories like:
no causal mechanism consistent with modern physics
the evidence that caused this a priori low probability hypothesis to be picked out from the set of all hypotheses has turned out to be faulty (possibly with reference to debunking)
this hypothesis has been scientifically investigated and found to be false (reference to studies, ideally also reference to replications of said studies)
Once one starts doing this, one would probably find that a number of the "irrational" beliefs are actually plausible, with little significant evidence either way.
Original thread here.
However "alternative" medicine cannot be established using the scientific method,
Care to explain what you mean by that assertion? You might want to start by defining what you mean by "alternative medicine".
The scientific method is reliable -> very_controversial_thing
And hardcoded:
P(very_controversial_thing)=0
Then the conclusion is that the scientific method isn't reliable.
The point I am trying to make is that if an AI axiomatically believes something which is actually false, then this is likely to result in weird behavior.
I suspect it would react by adjusting its definitions so that very_controversial_thing doesn't mean what the designers think it means.
This can lead to very bad outcomes. For example, if the AI is hard coded with P("there are differences between human groups in intelligence")=0, it might conclude that some or all of the groups aren't in fact "human". Consider the results if it is also programmed to care about "human" preferences.
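To see why a hardcoded zero is so pathological, note that under Bayes' rule a prior of exactly zero can never be revised upward, no matter how strong the evidence. Here is a minimal sketch (all names hypothetical, not from any real AI system) contrasting a small-but-nonzero prior with a hardcoded zero:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# An agent with a small but nonzero prior updates on repeated strong evidence:
p = 0.01
for _ in range(5):
    p = bayes_update(p, likelihood_if_true=0.9, likelihood_if_false=0.1)
print(p)  # climbs toward 1

# An agent with a hardcoded prior of exactly zero never moves,
# because the numerator is zero regardless of the likelihoods:
p = 0.0
for _ in range(5):
    p = bayes_update(p, likelihood_if_true=0.9, likelihood_if_false=0.1)
print(p)  # stays 0.0
```

So an agent that cannot move P(very_controversial_thing) off zero, yet keeps receiving evidence for it, has to relieve the tension somewhere else — e.g. by quietly redefining the terms the proposition is stated in.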
Probably; that seems to be their analogue of concluding Tay is a "Nazi".
Actually I am a bit surprised the post got two downvotes already. I was under the impression that LW would appreciate it, given it being a site about rationality and all. I've been reading LW for quite some time, but I hadn't actually posted before; did I do something horribly wrong or anything?
This list falls into a common failure mode among "skeptics" attempting to make a collection of "irrational nonsense". Namely, having no theory of what it means for something to be "irrational nonsense", and so falling back on a combination of the absurdity heuristic and the belief's social status.
It doesn't help that many of your labels for the "irrational nonsense" are vague enough that they could cover a number of ideas many of which are in fact correct.
Edit: In some cases I suspect you yourself don't know what they're supposed to mean. For example, you list "alternative medicine". What do you mean by this? The most literal interpretation is that you mean that all medical theories other than the current "consensus of the medical community" (if such a thing exists) are "irrational nonsense". Obviously you don't believe the current medical consensus is 100% correct. You probably mean something closer to "the irrational parts of alternative medicine are irrational", which is tautologically true and useless. Incidentally, it is also true (and useless) that the irrational parts of the current "medical consensus" are irrational.
Original thread here.
And if image recognition software started doing some kind of unethical recognition (I can't be bothered to find it, but something happened where image recognition software started recognising gorillas as African ethnicity humans or vice versa)
The fact that this kind of mistake is considered more "unethical" than other types of mistakes tells us more about the quirks of the early 21st-century Americans doing the considering than about AI safety.
Probably; they said something about that in the Wired article. One can still get an idea of its level of intelligence.
BTW, the twitter account is here if you want to see the things the AI said for yourself.
Original thread here.
It might help to take an outside view here:
Picture a hypothetical set of highly religious AI researchers who make an AI chatbot, only to find that the bot has learned to say blasphemous things. What lessons should they learn from the experience?
Original thread here.
I thought a bit about it, but I think Tay is basically a software version of a parrot that repeats back what it hears - I don't think it has any commonsense knowledge or makes any serious attempt to understand that tweets are about a world that exists outside of twitter. I.e., it has no semantics.
Well neither does image recognition software. Neither does Google's search algorithm.
Well, in the Hillary case my reason for favoring the "bribe" explanation is that presumably the person who first made the accusation was more familiar with the specifics of the situation than I am.
In the Senators' case, anti-insider-trading laws are written in such a way that they don't apply to Congressmen and their staff. So that makes that explanation more likely.
So is your definition of "Irrational Nonsense" anything that disagrees with your opinion?
Original thread here.
And it still borders on bigotry
Ok, define "bigotry"; also explain why "bigotry" as you just defined it is a bad thing.
Original thread here.
I said it was a bias. I still think it's a bias.
Really? Which bias are you referring to?
Which is the same old shit of atheists being amoral because why would they have morals without incentives.
Well, a lot of atheists are amoral.
Original thread here.
The bias of believing (to various degrees) that your in-group is children of the light and your out-group is the spawn of darkness.
Whose ingroup around here do you think is Christians?
Negative beliefs about a group of people based on weak evidence,
And then you use this as an excuse to ignore any negative evidence.