Hi Zvi,
A couple of months ago I wrote a COVID-19 risk calculator that's gotten some press and has even been translated into Spanish. Here's the link:
https://www.solenya.org/coronavirus
I've updated the calculations to leverage your table for age & preconditions, which was better than what I had. You can check the code for the calculator by clicking on the link near the top of the page. I've also put a link in that code to your article here.
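To give a rough idea of the kind of calculation involved, here's a minimal sketch assuming a simple multiplicative model over an age band and the worst precondition. The baseline, bands, and multipliers below are placeholders for illustration only; the calculator's actual figures and code are what's linked from the page.

```python
# Hypothetical sketch of the kind of risk calculation the calculator performs.
# Baseline, age bands, and multipliers are illustrative placeholders, not the
# actual figures used at solenya.org/coronavirus (see the linked source for those).

BASELINE_IFR = 0.005  # placeholder baseline infection fatality rate

AGE_MULTIPLIERS = {   # placeholder relative risk by age band
    (0, 39): 0.2,
    (40, 59): 1.0,
    (60, 79): 5.0,
    (80, 120): 15.0,
}

PRECONDITION_MULTIPLIERS = {  # placeholder relative risk by precondition
    "none": 1.0,
    "diabetes": 2.0,
    "heart_disease": 3.0,
}

def risk_of_death(age: int, preconditions: list[str]) -> float:
    """Crude multiplicative model: baseline x age factor x worst precondition factor."""
    age_factor = next(m for (lo, hi), m in AGE_MULTIPLIERS.items() if lo <= age <= hi)
    cond_factor = max(PRECONDITION_MULTIPLIERS.get(c, 1.0) for c in preconditions or ["none"])
    return min(1.0, BASELINE_IFR * age_factor * cond_factor)

print(f"{risk_of_death(67, ['diabetes']):.1%}")  # ~5.0% with these placeholder numbers
```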
Note that I'm trying to keep the interface ultra-simple. I get a stream of suggestions (e.g. can you add a separate slider for condition x), which, if all implemented, would have little effect on the overall outcome but would overcomplicate the interface and make the calculator lose its appeal.
Thanks,
Ben
Press:
https://www.tomsguide.com/news/coronavirus-calculator
https://www.news18.com/news/tech/can-the-coronavirus-kill-you-this-website-attempts-to-give-you-the-good-or-bad-news-2539469.html
https://www.quo.es/salud/coronavirus/q2004116668/calculadora-probabilidad-morir-coronavirus/
Also, there's ways of using uranium 238
Sigh.
A 5-second method (that I employ with varying degrees of success) is this: whenever I feel the frustration of a failed interaction, I question how it might have been made more successful by me, regardless of whose "fault" it was. Your "sigh" reaction comes across as expressing the sentiment "It's your fault for not getting me. Didn't you read what I wrote? It's so obvious". But could you have expressed your ideas almost as easily without generating confusion in the first place? If so, maybe your reaction would be instead along the lines of "Oh that's interesting. I thought it was obvious but I guess I can see how that might have generated confusion. Perhaps I could...".
FWIW I actually really like the central idea in this post, and arguably too many of the comments have been side-tracked by digressions on moralizing. However, my hunch is that you probably could have easily gotten the message across AND avoided this confusion. My own specific suggestion here is that stipulative definitions are semantic booby traps, so if possible avoid them. Why introduce a stipulative definition for "moralize" when a less loaded phrase like "suspended judgement" could work? My head hurts reading these comments trying to figure out how each person is using the term "moralize" and I now have to think twice when reading the term on LW, including even your old posts. This is an unnecessary cognitive burden. In any case, my final note here would be to consider that you'd be lucky if your target audience for your upcoming book(s) was anywhere near as sharp as wedrifid. So if he's confused, that's a valuable signal.
people who identify as rationalists they seem to moralize slightly less than average
Really? The LW website attracts Asperger's types and apparently morality is stuff Asperger's people like.
Good to see you've morally condoned the 5 second method.
rationalists don't moralize.
don't go into the evolutionary psychology of politics or the game theory of punishing non-punishers
OK, so you're saying that to change someone's mind, identify mental behaviors that are "world view building blocks", and then to instill these behaviors in others:
...come up with exercises which, if people go through them, causes them to experience the 5-second events
Such as:
...to feel the temptation to moralize, and to make the choice not to moralize, and to associate alternative procedural patterns such as pausing, reflecting...
Or:
...to feel the temptation to doubt, and to make the choice not to doubt, and to associate alternative procedural patterns such as pausing, prayer...
The 5-second method is sufficiently general to coax someone into believing any world view, not just a rationalist one.
I have an image of Eliezer queued up in a coffee shop, guiltily eyeing up the assortment of immodestly priced sugary treats. The reptilian parts of his brain have commandeered the more recently evolved parts of his brain into fervently computing the hedonic calculus of an action that other, more foolish types, might misclassify as a sordid instance of discretionary spending. Caught staring into the glaze of a particularly sinful muffin, he now faces a crucial choice. A cognitive bias, thought to have been eradicated from his brain before the SIAI was founded, seizes its moment. "I'll take the triple chocolate muffin thank you" Eliezer blurts out. "Are you sure?" the barista asks. "Well I can't be 100% sure. But the future of intergalactic civilizations may very well depend on it!"
While I'm inclined to agree with the conclusion, this post is perhaps a little guilty of generalizing from one example - the paragraphs building up the case for the conclusion are all "I..." yet when we get to the conclusion it's suddenly "We humans...". Maybe some people can't handle the truth. Or maybe we can handle the truth under certain conditions that so far have applied to you.
P.S. I compiled a bunch of quotes from experts/influential people for the questions Can we handle the truth? and Is self-deception a fault?.
The chief role of metaethics is to provide far-mode superstimulus for those inclined to rationalize social signals literally.
Ethics and aesthetics have strong parallels here. Consider this quote from Oscar Wilde:
For we who are working in art cannot accept any theory of beauty in exchange for beauty itself, and, so far from desiring to isolate it in a formula appealing to the intellect, we, on the contrary, seek to materialise it in a form that gives joy to the soul through the senses. We want to create it, not to define it. The definition should follow the work: the work should not adapt itself to the definition.
Whereby any theory of art...
merely serves as after-the-fact justification of the sentiments that were already there.
I've gone through massive reversals in my metaethics twice now, and guess what? At no time did I spontaneously acquire the urge to rape people. At no time did I stop caring about the impoverished. At no time did I want to steal from the elderly. At no time did people stop having reasons to praise or condemn certain desires and actions of mine, and at no time did I stop having reasons to praise or condemn the desires and actions of others.
Metaethics: what's it good for...
I believe the primary form of entertainment for the last million years has had plenty of color.
I don't think social influence alone is a good explanation for the delusion in the video. Or more precisely, I don't think the delusion in the video can be explained as just a riff on the Asch conformity experiment.
I'm merely less skeptical that the woman in the video is a stooge after hearing what Nancy had to say. But yes, the anchoring techniques he uses in the video might be nothing but deliberate misdirection.
Interesting. This makes me less skeptical of Derren Brown's color illusion video (summary: a celebrity mentalist uses NLP techniques to convince a woman yellow is red, red is black etc.).
Perhaps the post could be improved if it laid out the types of errors our intuitions can make (e.g. memory errors, language errors, etc.). Each type of error could then be analyzed in terms of how seriously it impacts prevalent theories of cognition (or common assumptions in mainstream philosophy). As it stands, the post seems like a rather random (though interesting!) sampling of cognitive errors that serve to support the somewhat unremarkable conclusion that yes, our seemingly infallible intuitions have glitches.
I dunno Nancy. I mean you start off innocently clicking on a link to a math blog. Next minute you're following these hyperlinks and soon you find yourself getting sucked into a quantum healing website. I'm still trying to get a refund on these crystals I ended up buying. Let's face it. These seemingly harmless websites with unrigorous intellectual standards are really gateway drugs to hard-core irrationality. So I have a new feature request: every time someone clicks on an external link from Less Wrong, a piece of JavaScript pops up with the message: "You are very probably about to enter an irrational area of the internet. Are you sure you want to continue?" If you have fewer than 100,000 karma points, clicking yes simply redirects you to the Sequences.
Speaking of the yellow banana, people do a lot of filling in with color.
One of Dennett's points is that the notion that our mind "fills in" is misleading. In the case of vision, our brain doesn't "paint in" missing visual data, such as the area in our field of vision not captured by our fovea. Our brains simply lack epistemic hunger for such information; they don't need it to perform their tasks.
I've noticed that this account potentially explains how color works in my dreams. My dreams aren't even black and white - the visual aspects are mostly just forms. However, if the color has meaning or emotion, it's there. I recently had a dream where I looked up at the sky, and the moon was huge and black, moving in a rapid arc across the sky then suddenly diving into the Earth causing an apocalyptic wave of dirt to head towards me. The vivid blackness was present, because it meant something to me emotionally. The houses, in comparison, merely had form, but no color. In any case, it seems that the question "Do we dream in color?" can't be answered adequately if using a "filling in" model of the mind.
Daniel Dennett's Consciousness Explained is a very relevant piece of work here. Early in his book, he seeks to establish that our intuitions about our own perceptions are faulty, and provides several scientific examples to build his case. The Wikipedia entry on his multiple drafts theory gives a reasonable summary.
You've articulated some of the problems of a blogroll well. Perhaps the blogroll idea could be evolved into a concept that better fits the needs of this community, while retaining its core value and simplicity:
1) Alongside each link could be its controversy level, based on the votes for and against the link (see the sketch after this list). By making the controversy explicit, the link can no longer be seen as a straight-up endorsement.
2) Alongside each link could be its ranking based on, say, only the top 50 users. This would let people explicitly see what the majority vs. the "elite rationalists" thought - an interesting barometer of community rationality.
3) Split the "blogroll" in two - all-time most votes vs. most votes in the last week/month. This would alleviate the problem of staleness that Nancy pointed out. This is also nice because the links could be for not just websites, but any interesting new article.
4) Allow discussion of any link. Comments could warn users of applause lights etc. This is perhaps why the current voting system works well for choosing top posts, despite the problems you point out with majority opinion. A poor post/link can never get past the gauntlet of critical comments.
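To make point 1 concrete, here's a minimal sketch of one possible controversy metric; the formula and function are my own illustration, not an existing Less Wrong feature:

```python
def controversy(upvotes: int, downvotes: int) -> float:
    """One possible controversy score for a link, from 0.0 (one-sided) to 1.0 (evenly split).

    Purely illustrative - not anything Less Wrong actually implements. It's simply
    the share of votes on the minority side, doubled.
    """
    total = upvotes + downvotes
    if total == 0:
        return 0.0
    return 2 * min(upvotes, downvotes) / total

# A link with 40 up / 38 down is highly controversial; 40 up / 2 down is not.
print(controversy(40, 38))  # ~0.97
print(controversy(40, 2))   # ~0.10
```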
You could generalize this to the point that ordinary posts essentially become a special case of an "internal link". Anyway, enough about a technical proposal - at this point I'm reluctant to push any harder on this. An impression I have of Less Wrong is that it's somewhat of a walled garden (albeit a beautiful one!) and that such changes would open it up a little, while maintaining its integrity. The resistance people have seems to be rooted in this - a fear of in any way endorsing "inferior intellectual standards". What we should instead be fearful of is not doing everything we can to raise the sanity waterline.
A website has a specific goal that it's trying to uniquely achieve, and a general goal that places it within a community of like-minded websites. Less Wrong's specific goal is to refine the art of human rationality, and its general goal is to raise the sanity waterline. If other websites are successfully raising the sanity waterline, it behooves Less Wrong to link to them.
I don't like this idea. The choice of websites to put on the sidebar is likely to be contentious. What exactly qualifies a website to be endorsed by LW? How should a website be judged considering the various PR implications of endorsing it? Also, who exactly stands behind the endorsement, considering that LW is a group blog?
I agree that there are genuine challenges in selecting which websites to link to, especially for a community blog. But a community blog, if it meets those challenges, actually has the greater potential to choose a good set of links. Less Wrong should strive to have a better set of links than its sister site, Overcoming Bias. These links matter. A blogroll is a standard feature of blogs, and for good reason. I've discovered many great websites this way. Unfortunately, never via Less Wrong.
What's more, LW members already have the option to put website links in their profiles, and the websites authored or endorsed by prominent LW contributors are thus already given significant promotion.
While I think high-karma Less Wrong users deserve promotion, it's not the only criterion that justifies promotion. If there's a great sanity-waterline-raising website out there, it should be linked to, whether or not there's a high-karma Less Wrong user running it. On my own website I link to Wikipedia's argument fallacy list and cognitive bias list. Without digressing into a debate as to whether Less Wrong should link to these lists too, I'll merely point out that with the criterion you're suggesting, such links would necessarily have zero value. I think JGWeissman's proposal would choose the appropriate value for such links.
Nomination: Common Sense Atheism.
Blogroll / Side Bar Section for Links to Rationality Related Websites. I love Overcoming Bias, but it seems a bit biased that Overcoming Bias is the only other website linked from here.
Reply to this comment with a comment for each website nomination?
Hmm... maybe with this feature new links could be added by users (presuming a minimum karma criterion), and then other users could vote each link up and down, so that the ordering of the list was organic.
Emotional awareness is a skill that can be cultivated, and it increases one's agreeableness. Watch a disagreeable person in action and it's pretty obvious that they're not really picking up how other people are reacting to their behavior. Note that it's much easier to see disagreeable behavior in others than in oneself. The challenge in becoming more agreeable lies partly in seeing yourself as others see you.
if you really want to know how valid a particular idea you've read is, there are quantitative ways to get closer to answering that question.
The ultimate in quantitative analysis is to have a system predict what your opinion should be on any arbitrary issue. The TakeOnIt website does this by applying a collaborative filtering algorithm on a database of expert opinions. To use it you first enter opinions on issues that you understand and feel confident about. The algorithm can then calculate which experts you have the highest correlation in opinion with. It then extrapolates what your opinion should be on issues you don't even know about, based on the assumption that your expert agreement correlation should remain constant. I explained the concept in more detail a while ago on Less Wrong here, but have since actually implemented the feature. Here are TakeOnIt's predictions of Eliezer's opinions. The more people add expert opinions to the database, the more accurate the predictions become.
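For a sense of the mechanics, here's a minimal sketch of correlation-based opinion prediction in the spirit described above. The data, names, and weighting scheme are hypothetical; TakeOnIt's real implementation may differ.

```python
# Illustrative sketch of predicting a user's opinions from expert agreement.
# Opinions are encoded from -1.0 (strongly disagree) to +1.0 (strongly agree).
# All data and weighting below are hypothetical, not TakeOnIt's actual code.

expert_opinions = {
    "Expert A": {"cryonics": 0.8, "homeopathy": -1.0, "agw": 0.9},
    "Expert B": {"cryonics": -0.6, "homeopathy": 0.7, "agw": -0.5},
}

my_opinions = {"homeopathy": -0.9, "agw": 0.8}  # issues I'm confident about

def agreement(mine: dict, theirs: dict) -> float:
    """Average product of opinions on shared issues (a crude correlation proxy)."""
    shared = [issue for issue in mine if issue in theirs]
    if not shared:
        return 0.0
    return sum(mine[i] * theirs[i] for i in shared) / len(shared)

def predict(issue: str) -> float:
    """Predict my opinion on an issue as an agreement-weighted average of expert opinions."""
    weights = {name: agreement(my_opinions, ops) for name, ops in expert_opinions.items()}
    numerator = sum(w * expert_opinions[name].get(issue, 0.0) for name, w in weights.items())
    denominator = sum(abs(w) for w in weights.values()) or 1.0
    return numerator / denominator

print(predict("cryonics"))  # ~0.7: I tend to agree with the expert who endorses cryonics
```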
Note that the website currently requires you to publicly comment on an issue in order to get your opinion predictions. A few people have requested that you should be able to enter your opinion without having to comment. If enough people want this, I'll implement that feature.
one of my dreams is that one day we could develop tools that ... allowed you to estimate how contentious that claim was, how many sources were for and against it... and links to ... tell you about who holds what opinions, and allowed you to somewhat automate the process of reading and making sense of what other people wrote.
That's more or less the goal of TakeOnIt. I'd stress that the biggest challenge here is populating the database of expert opinions rather than building the tools.
An even more ambitious project: making a graph of which studies invalidate or cast doubt on which other studies, on a very big scale, so you could roughly pinpoint the most certain or established areas of science. This would require some kind of systematic method of deducing implication, though.
Each issue on TakeOnIt can be linked to any other issue by adding an "implication" between two issues. Green arrows link supporting positions; red arrows link contradictory positions. So for example, the issue of cryonics links to several other issues, such as the issue of whether information-theoretic death is the most real interpretation of death (which if true, supports the case for cryonics).
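Here's a minimal sketch of how such implications might be represented as a directed graph with supporting and contradicting edges. The data structure and the invented issue names are my own illustration, not TakeOnIt's actual schema.

```python
# Illustrative representation of implications between issues (green = supports,
# red = contradicts). The structure and the invented issue names are hypothetical;
# TakeOnIt's real schema may differ.

SUPPORTS, CONTRADICTS = "supports", "contradicts"

implications = [
    # (premise issue, relation, conclusion issue)
    ("Is information-theoretic death the most real interpretation of death?",
     SUPPORTS,
     "Should I sign up for cryonics?"),
    ("Is vitrification too damaging for future revival?",
     CONTRADICTS,
     "Should I sign up for cryonics?"),
]

def related_issues(issue: str):
    """List the premises bearing on an issue, with the direction of each arrow."""
    return [(premise, relation) for premise, relation, conclusion in implications
            if conclusion == issue]

for premise, relation in related_issues("Should I sign up for cryonics?"):
    print(f"{relation:>11}: {premise}")
```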
I guess the moral is "Don't trust anyone but a mathematician"?
Safety in numbers? ;)
Perhaps it's useful to distinguish between the frontier of science vs. established science. One should expect the frontier to be rather shaky and full of disagreements, before the winning theories have had time to be thoroughly tested and become part of our scientific bedrock. There was a time after all when it was rational for a layperson to remain rather neutral with respect to Einstein's views on space and time. The heuristic of "is this science established / uncontroversial amongst experts?" is perhaps so boring we forget it, but it's one of the most useful ones we have.
To evaluate a contrarian claim, it helps to break down the contentious issue into its contentious sub-issues. For example, contrarians deny that global warming is caused primarily by humans, an issue which can be broken down into the following sub-issues:
Have solar cycles significantly affected earth's recent climate?
Does cosmic radiation significantly affect earth's climate?
Has earth's orbit significantly affected its recent climate?
Does atmospheric CO2 cause significant global warming?
Do negative feedback loops mostly cushion the effect of atmospheric CO2 increases?
Are recent climatic changes consistent with the AGW hypothesis?
Is it possible to accurately predict climate?
Have climate models made good predictions so far?
Are the causes of climate change well understood?
Has CO2 passively lagged temperature in past climates?
Are climate records (of temperature, CO2, etc.) reliable?
Is the Anthropogenic Global Warming hypothesis falsifiable?
Does unpredictable weather imply unpredictable climate?
It's much easier to assess the likelihood of a position once you've assessed the likelihood of each of its supporting positions. In this particular case, I found that the contrarians made a very weak case indeed.
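For illustration, here's one naive way to aggregate sub-issue assessments into an overall judgment, treating each sub-issue as an independent line of evidence and summing log-odds. The independence assumption is unrealistic and the probabilities are placeholders, not my actual assessments of the sub-issues above.

```python
import math

def log_odds(p: float) -> float:
    return math.log(p / (1 - p))

def combine(prior: float, evidence_probs: list[float]) -> float:
    """Naive-Bayes-style aggregation: sum the log-odds shift contributed by each
    sub-issue (relative to an uninformative 0.5) on top of the prior.
    Assumes the sub-issues are independent, which real sub-issues rarely are."""
    total = log_odds(prior) + sum(log_odds(p) - log_odds(0.5) for p in evidence_probs)
    return 1 / (1 + math.exp(-total))

# e.g. prior of 0.5, with three sub-issues each judged 70% likely to favor the position
print(round(combine(0.5, [0.7, 0.7, 0.7]), 2))  # ~0.93
```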
If you have social status, it is worth sparing some change in getting used to not only being wrong, but being socially recognized as wrong by your peers...
Emperor Sigismund, when corrected on his Latin, famously replied:
I am king of the Romans and above grammar.
I know that most men — not only those considered clever, but even those who are very clever and capable of understanding most difficult scientific, mathematical, or philosophic, problems — can seldom discern even the simplest and most obvious truth if it be such as obliges them to admit the falsity of conclusions they have formed, perhaps with much difficulty — conclusions of which they are proud, which they have taught to others, and on which they have built their lives.
— Leo Tolstoy, 1896 (excerpt from "What Is Art?")
Illusory superiority seems to be the cognitive bias to overcome here.
Expert opinions on:
Does money make you happy? (specifically: absolute spending power)
Does relative wealth make us happier than absolute wealth?
Voted up.
if you have to choose between fitting in with your group etc and believing the truth, you should shun the truth.
I think many people develop a rough map of other people's beliefs, to the extent that they avoid saying things that would compromise fitting in with the group they're in. Speaking of which:
irrationalists free-ride on the real-world achievements of rationalists
Trying to get to level 4, are we? (Clearly I'm not ;)) Conversely, you could argue that "irrationalists" are better at getting things done due to group leverage and rationalists free-ride off those achievements.
Perhaps it would be a good idea to remember, and keep remembering, and make it clear in your writing, that "women" are not a monolithic block and don't all want the same thing.
A woman who doesn't want a generalization applied to them? :)
For me, understanding "what's really going on" in typical social interactions made them even less interesting than when I didn't.
Merely "tuning in" to a social interaction isn't enough. Subtextual conversations are often tedious if they're not about you. You have to inject your ego into the conversation for things to get interesting.
So if I'm with a bunch of people from my class ... and none of us have any major conflict of interest...
If you were a character in a sitcom I was writing, I'd have your dream girl walk in just as you were saying that.
It seems this post bundled together the CPU vs. GPU theory regarding the AS vs. NT mindset with a set of techniques on how to improve social skills. The techniques, however - and in a sense this is a credit to the poster - are useful to anyone who wants to improve their social skills, regardless of whether the cause of their lack of skill is:
1) High IQ
2) Introversion
3) Social Inexperience
4) AS
5)
A combination of several of these factors might be the cause of social awkwardness. It's possible to place too much importance on looking for a root cause. The immediate cause is simply a lack of understanding of social interaction - the techniques will help anyone develop that understanding.
If you lack that powerful social coprocessor... [you will]...explicitly reason through the complex human social game that most people play without ever really understanding.
Some NTs are somewhat unconscious of the game, but that doesn't mean they don't understand it. I'd argue the most useful definition of "understanding" is that one's brain contains the knowledge - whether one is conscious of it or not - that enables one to successfully perform the relevant task. Any other definition is, quite literally, academic. Furthermore, I'd argue that those best at the game actually become conscious of what is unconscious for most people, such as the degree to which status plays a role in social interaction. This helps them gain an edge over others, such as better predicting the ramifications of gossip, or the ability to construct a joke. A joke that works well socially often consists of the more socially aware person bringing to the surface an aspect of someone else's self-serving behavior that was previously just under the social group's conscious radar. It would be impossible to construct such jokes without a conscious understanding of the game.
(Most importantly) Find a community of others - who are trying to solve the same problem
If you want to learn social skills, hang out with people who have them. And it's not enough to just hang out - you have to enjoy it and participate. And to be frank, often the easiest way to do that is with alcohol. And don't assume you're so different to other people - why do you think they're drinking?
Thanks for the feedback.
there's a lot of chaff.
Do you mean chaff as in "stuff that I personally don't care about" or chaff as in "stuff that anyone would agree is bad"?
there doesn't seem to be enough activity yet.
Yes, the site is still in the bootstrapping phase. Having said that, the site needs to have a better way of displaying recent activity.
Franklin's quote is more about cryonics being good if it were feasible than about whether it is feasible. Ben, do you think it should be moved to this question?
Good call.
to even include some of these people together is simply to give weight to views which should have effectively close to zero weight.
No no no! It's vital that the opinions of influential people - even if they're completely wrong - are included on TakeOnIt. John Stuart Mill makes my point perfectly:
...the peculiar evil of silencing the expression of an opinion is... If an opinion is right, [people] are deprived of the opportunity of exchanging error for truth: if wrong, they lose what is almost as great a benefit, the clearer perception and livelier impression of the truth, produced by its collision with error.
P.S. I updated the tag line for Conservapedia from "Encyclopedia" to "Christian Encyclopedia". Thanks for pointing that out.
TakeOnIt records the opinions of BOTH experts and influencers - not just experts. Perhaps I confused you by not being clear about this in my original comment. In any case, TakeOnIt groups opinions by the expertise of those who hold the opinions. This accentuates - not blurs - the distinction between those who have relevant expertise and those who don't (but who are nonetheless influential). It also puts those who have expertise relevant to the question topic at the top of the page. You seem to be saying readers will easily mistake an expert for an influencer. I'm open to suggestions if you think it could be made clearer than it is.
Voted up because I think AS is a great example of psychological diversity. I'm curious however as to the origin of your belief that AS people are more attracted to decompartmentalization than neurotypicals are.
This is the bunk-detection strategy on TakeOnIt:
1) Collect top experts on either side of an issue, and examine their opinions.
2) If step 1 does not make the answer clear, break the issue down into several sub-issues, and do step 1 for each sub-issue.
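Here's a minimal sketch of that two-step strategy as a recursive procedure; the data shapes, threshold, and helper names are my own illustration, not TakeOnIt's code.

```python
# Illustrative sketch of the two-step bunk-detection strategy above.
# get_expert_opinions and get_sub_issues are hypothetical, caller-supplied functions.

def assess(issue, get_expert_opinions, get_sub_issues,
           threshold=0.7, depth=0, max_depth=3):
    """Return 'likely true', 'likely false', or 'unclear' for an issue.

    Step 1: look at the balance of top-expert opinion on the issue itself.
    Step 2: if that's unclear, recurse into the sub-issues and tally their verdicts.
    """
    opinions = get_expert_opinions(issue)  # list of True/False expert verdicts
    if opinions:
        agree = sum(opinions) / len(opinions)
        if agree >= threshold:
            return "likely true"
        if agree <= 1 - threshold:
            return "likely false"
    if depth >= max_depth:
        return "unclear"
    verdicts = [assess(sub, get_expert_opinions, get_sub_issues,
                       threshold, depth + 1, max_depth)
                for sub in get_sub_issues(issue)]
    if verdicts and verdicts.count("likely true") > len(verdicts) / 2:
        return "likely true"
    if verdicts and verdicts.count("likely false") > len(verdicts) / 2:
        return "likely false"
    return "unclear"
```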
Examples that you alluded to in your post (I threw in cryonics because that's a contrarian issue often brought up on LW):
Global Warming
Cryonics
Climate Engineering
9-11 Conspiracy Theory
Singularity
In addition, TakeOnIt will actually predict what you should believe using collaborative filtering. The way it works is that you enter your opinions on several issues that you strongly believe you've got right. It will then detect the cluster of experts you typically agree with, and extrapolate what your opinion should be for other issues, based on the assumption (explained here) that you should continue to agree with the experts you've previously agreed with.
You can see the predictions it's made for my opinions here. One of the predictions is that I should believe homeopathy is bunk.
Actions speak louder than words. A thousand "I love you"s doesn't equal one "I do". Perhaps our most important beliefs are expressed by what we do, not what we say. Daniel Dennett's Intentional Stance theory uses an action-oriented definition of belief:
Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do.
What we say or think we believe is vulnerable to distortion due to a desire to signal, and due to the fact that our consciousness only has partial access to our brain's map of reality. By ignoring our words and looking at our actions we can bypass these shortcomings. It's probably a major reason why prediction markets work so well.
Status is relative to a group, and each group values different skills and traits. We gravitate towards groups where we have value.
Yes. But if the topic of something you're not good at comes up, what are you going to do? Various strategies:
a) Downplay the importance of the thing that you're not good at.
b) Change the subject.
c) Make a joke about totally sucking at that thing (while keeping the literal subject the same, it changes the implicit subject to the social ability of making other people laugh).
d) Mention a close relative, friend, or partner who's really good at that thing (increasing status by affiliation).
I think I may even do e) which is to show enthusiastic appreciation for the thing I'm not good at, possibly sprinkled with demonstrating surprising knowledge of the thing I'm expected to not know about.
UPDATE: f) Riffing on 'c', liken yourself to a low status group. HT Barack Obama
Being dismissive of things you're not good at is beneficial to your status.
Based on your idea and the discussion that followed, I've added the feature to flag a quote as fictional.
On the question pages, fictional quotes are put in their own group:
Is information-theoretic death the most real interpretation of death?
On the expert pages, fictional quotes are flagged per quote:
H. P. Lovecraft's Opinions
Fictional quotes are discounted from the prediction analysis.
Unless you count things like "on top of stalagmites" as sitting methods.
From Blackadder:
Aunt: 'Chair'? You have chairs in your house?
Edmund: Oh, yes.
Aunt: [slaps him twice] Wicked child!!! Chairs are an invention of Satan! In our house, Nathaniel sits on a spike!
Edmund: ...and yourself...?
Aunt: I sit on Nathaniel -- two spikes would be an extravagance.
Being understimulated is intolerable to me.
How can something intolerable be understimulating? Sure, I'm equivocating on the type of stimulation you're referring to here, but in the spirit of luminosity, shouldn't we be interested in exploring the places in our minds that we're afraid to go? I'm not recommending you step into a sensory deprivation chamber (or have your brain emulated without hooking up the inputs and outputs), but experimenting with meditation seems like a potentially luminous activity, even if you did it with the modest goal of simply getting a peek into what it's about.
P.S. Nice post; I also enjoyed some of the earlier posts in the sequence; I think at times I wanted to see a concrete application of the abstractions, which this post provided.
Construal Level Theory (the one used to explain near-far mode) can also be used to explain self-control. One of the creators of the theory explains it in a paper here ("Construal Levels and Self-Control"), and another paper is here.
The authors propose that self-control involves making decisions and behaving in a manner consistent with high-level versus low-level construals of a situation. Activation of high-level construals (which capture global, superordinate, primary features of an event) should lead to greater self-control than activation of low-level construals (which capture local, subordinate, secondary features). In 6 experiments using 3 different techniques, the authors manipulated construal levels and assessed their effects on self-control and underlying psychological processes. High-level construals led to decreased preferences for immediate over delayed outcomes, greater physical endurance, stronger intentions to exert self-control, and less positive evaluations of temptations that undermine self-control. These results support a construal-level analysis of self-control.