So the rent it pays is something like when you put up a sign "DO NOT DISTURB - AUTOMATED GUN TURRETS INSIDE" on your apartment. It doesn't change the fact that you have to pay rent, or the dollar amount, but it means you can spend less effort on negotiating late payments, because the landlord will wait just a bit longer before kicking you out.
In terms of anticipation, it's something like "if I did it over again, I'd still have done the same thing, because my behaviors were dictated by circumstance". I guess it's sort of like timeless decision theory vs. causal decision theory; in TDT, you can trust that your past self has modified you to make the correct decision in the future, while a causal decision theorist would not trust such an assumption.
I don't think it's pretending. They are both human, after all. Explaining away the difference in statements as situational bias (math is unpopular, so Grothendieck perceived it as hard; business is popular, so Jobs perceived it as easy) does not seem irrational to me. If we dissolve the question of blame, then why should praise be any different?
I think you've pinpointed the difficulty I ran into before I decided to just post it; Grothendieck was self-effacing, while Jobs was self-aggrandizing, and there isn't really enough in common for Jobs to say something that would present him in the same light as Grothendieck. Even the quotes I did have, from other people, were kind of stretching the bounds of credulity (e.g., the Samsung quote is preceded by 'the consumer perspective is that...', and then he goes on to talk about re-marketing Android as being better than Apple). I guess I was trying to compensate for the 'worse-than-average effect' by using a different industry, but there should be a better/quicker way to recalibrate your self-image.
Steve Jobs, and life/death in general. Nothing too serious, just a prompt for discussion.
My post was not really designed to be followed, but more to use the collective makeup of LW as a human computational cluster / search engine / associative memory. I actually got a real response (ChristianKI), which I'm very happy about. I guess SolveIt can ban me if he really wants. (A guy named 'SolveIt'. People make themselves less human supposedly to benefit others - is it artificial intelligence or just people pretending they're intelligent? is gwern a robot? o_O)
The last paragraph was just brain-dumping my expectations for the conference. I was a bit off the mark. No sex and very little music, although there was a lot of alcohol and people shouting at each other. I'm guessing no LW'er besides me would consider it anything other than a waste of time.
There was also the question before Jobs's quote: when can an AI self-modify safely, or when can a human legally do mind-altering substances? Maybe there was an answer in the AIXI series; I didn't really get an answer at the conference, although I did see various effects of alcohol first-hand.
I was there 2005-2007.
I just rewrote your post to be about me / Steve Jobs instead of you / Grothendieck, see http://lesswrong.com/r/discussion/lw/lpp/a_long_comment/. Maybe you can understand what I mean about the post "mostly not being about math".
As a former (3-time) MathPath student, I have the feeling I've seen you before. I must admit that it's only a feeling.
As far as Grothendieck goes, I think he is simply channeling Buddhism's concept of beginner's mind. Nothing new, really. Most quotes are null-content "yes I'm a human" type things. The main problem I have with your post is that none of it is math-specific; take out the "math" repetition, the few mentions of calculus etc., and it's simply a generic description of ability.
And UnBBayes does computational analyses, similar to Flying Logic, except it uses Bayesian probability.
There's not good evidence for the claim that reading a list of a bunch of biases improves your decision-making ability. See Eliezer's discussion of the hindsight bias: http://lesswrong.com/lw/il/hindsight_bias/
I checked that the procedure accounts for the biases. Hindsight bias is avoided by computing uncertainty using a regression analysis. Availability bias is avoided by using a large database with random sampling. Etc. I haven't gone through all of them, but so far the biases I've looked at can't affect the decision outcome because the human isn't directly involved in those stages of computation.
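To make those two mechanisms concrete, here's a rough sketch of what regression-based uncertainty plus random sampling might look like. The data and numbers are entirely synthetic, and this is only an illustration of the idea, not my actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 1000)                 # stand-in "large database"
y = 2.0 * x + rng.normal(0, 1.5, 1000)

idx = rng.choice(len(x), size=100, replace=False)   # random sampling, not whatever comes to mind
xs, ys = x[idx], y[idx]

slope, intercept = np.polyfit(xs, ys, 1)             # regression fit
resid_sd = np.std(ys - (slope * xs + intercept))      # spread of the residuals

x_new = 7.0
pred = slope * x_new + intercept
print(f"{pred:.1f} +/- {2 * resid_sd:.1f}")           # estimate with an uncertainty band
```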
Someone wearing a black belt is probably going to be perceived as more aggressive
And there's even a study on black uniforms that shows they increase perceived aggression.
Changes in confidence and body language.
This page says martial arts training increases dominance, as you say. On the other hand, that study also says that martial arts training decreases (observed) aggression. This study says perceived aggressiveness is highly correlated with proportion of mixed-martial-arts fights won, which I interpret as also meaning that martial arts training increases perceived aggression before a fight (since martial training ought to result in winning more martial arts fights). So it looks like martial arts training encourages controlling the aggressiveness signal, suppressing it in some non-fighting cases and enhancing it in competition. Or else the actual aggression levels decreased because the willingness to fight was communicated more clearly and thus people chose to fight less because their estimates of the costs rose.
We went through many separate points, and at the moment I don't know how to pull them together into one post in a good way. If you see a decent way, feel free.
My general writing strategy is as follows: I go through source material, write down all the quotes/facts that seem useful into a bullet list, then sort alphabetically, then reorder and group the bullets, then rewrite the sub-bullets into paragraphs, then reorder the paragraphs, then remove the list formatting and add paragraph formatting, then add a title and introduction. (The conclusion is just more facts/quotes). I've practiced this on a couple of my required-because-core essays and they've gotten reasonable marks (B+ / A- level depending on how nice the teacher is).
The problem is that you assume that you know the relevant biases.
Wikipedia has a list; I've checked a few of them, and the rest are on my TODO list. I have that page watched so if there's a new bias I'll know.
There are often cases where you don't know why someone screws up. There are domains where it's easier to get knowledge about how much people screw up than to understand the reasons behind the screwups.
Information is produced regardless, and often recorded (see e.g. Gwern's Mistakes page). So long as I myself don't screw up, which, assuming that I always follow my decision procedure and my decision procedure is correct, I won't, then it doesn't matter.
Fear produces fight or flight responses. People often fight out of fear. Aggression often comes out of weakness.
OK, but I was talking about "perceived willingness to be aggressive" (signal), not aggression (action).
A karate black belt is dominant but usually not aggressive. Taller people get paid more money because being tall is a signal for social dominance.
Someone wearing a black belt is probably going to be perceived as more aggressive, the same way someone idly cleaning their fingernails with a sharp knife might be. Similarly if a person adopts something recognized as a fighting stance. Not certain about tall people, that's probably something else besides perceived aggressiveness, e.g. "My parents were rich and could feed me a lot".
This has gone on long enough that it might be worth summarizing into a post... do you want to write it or should I?
Conflating whether or not you could do something to stop them with finding truth makes it harder to have an accurate view of whether or not the result is true. Accepting reality for what it is helps to have an accurate perception of reality.
I'm not certain where you see conflation. I have separate storage areas for things to think about, evidence, actions, and risk/reward evaluations. They interact as described here. Things I hear about go into the "things to think about" list.
Only once you understand the territory should you go out and try to change things.
The world is changing so I must too. If the apocalypse is tomorrow, I'm ready. I don't need to "understand" the apocalypse or its cause to start preparing for it. If I learn something later that says I did the wrong thing, so be it. I prefer spending most of my time trying to change things rather than sitting in a room all day trying to understand. Indeed, some understanding can only be gained through direct experience. So I disagree with you here.
If you do the second step before the first you mess up your epistemology. You fall for a bunch of human biases, evolved for finding out whether the neighboring tribe might attack your tribe, that aren't useful for a clear understanding of today's complex world.
The decision procedure I outlined above accounts for most biases; you're welcome to suggest revisions or stuff I should read.
I spoke about incentives. [...] The incentives go towards "spectacular".
You didn't, AFAICT; you spoke about "inherent biases". I think my point still stands though; averaging over "all information everywhere" counteracts most perverse incentives, since perversion is rare, and the few incentives left are incentives that are shared among humans such as survival, reproduction, etc. In general humans are good at that sort of averaging, although of course there are timing and priming effects. Researchers/bloggers are incentivized to produce good results because good results are the most useful and interesting. Good results lead to good products or services (after a 30-year lag). The products/services lead to improved life (at least for some). Improved life leads to more free time and better research methods. And the cycle goes on; the end result, AFAICT, is a big database of mostly-correct information.
Scott H Young, whom I respect and who's a nice fellow, wrote his post against spaced repetition and still now recommends, in a later post, the usage of Anki for learning vocabulary.
His post is entitled "Why Forgetting Can Be Good" and his mention of Anki is limited to "I’m skeptical of the value of an SRS for most domains of knowledge." If he then recommends Anki for learning vocabulary, this changes relatively little; he's simply found a knowledge domain where he found SRS useful. Different studies, different conclusions, different contributions to different decisions.
It's not about remembering; it's about being able to make estimates even when you aren't sure.
You're never sure, so why mention "even when you aren't sure", since it's implied? Striking that out...
It's not about remembering; it's about being able to make estimates.
Estimation comes after the evidence-gathering phase. If you have no evidence you can make no estimates. Fermi estimation is just another estimation method, so it doesn't change this. If you have no memory, then you have no evidence. So it is about remembering. "Those who cannot remember the past are condemned to repeat it".
And you can calibrate your error intervals.
If you have no estimates you can't have error intervals either. Indeed, you can't do calibration until you have a distribution of estimates.
Aggression is not the central word. Status and dominance also appear. People do a bunch of things to appear higher status.
It looks like the central word is definitely dominance. Stringing the top words into a sentence I get "Sports teams wear red to show dominance and it has an effect on referees' performance". I guess I was going off of the Mandrill story where signs of dominance are correlated with willingness to be aggressive. This study says dominance and threat are emphasized by wearing red, where "threat" is measured by "How threatening (intimidating, aggressive) did you feel?". Some other papers also relate dominance to aggressiveness. So I feel comfortable confusing the two, since they seem to be strongly correlated and relatively flexible in terms of definition.
The comments do focus on status, so I guess you have a point. But I generally skip over the comments when an article is linked to. And the status discussion was in the comments of the Overcoming Bias post, so it was by no means central.
One of the studies in question suggested that it makes women more attracted to you, measured by the physical distance in conversation. Another one suggested attraction based on photo ratings. I actually did the comparison on hotOrNot. I tested a blue shirt against a red shirt, Photoshopped so that nothing besides the color was different. For my photo, blue scored more attractive than red, despite the studies saying that red is the color that raises attractiveness.
Would you be referring to, among others, this study? Unfortunately... it still looks like experimental psychology, so again I have to plead lack of statistics.
The replication rates for cancer biology seem to be even worse than for psychology, if you trust the Amgen researchers, who could only replicate 6 of the 53 landmark studies that they tried to replicate.
I've mostly been reading Army / DoD studies, which have a different funding model. But I guess cancer will become relevant eventually (preferably later rather than sooner).
Side note: does LW have a "collapse threads more than N levels deep" feature like reddit? It probably should have triggered a few replies ago, so I didn't post on the wrong child...
For example, I don't want a red car because I don't want to get pulled over by the cops all the time.
The car story appears to be a myth nowadays, but that could just be due to the increased use of radar guns and better police training. Radar guns were introduced around the 1950s, so all of their policemen quotes are too recent to tell.
I thought I had written all I could. What sort of things should I add?
Once upon a time I tried using what I'll coin "quicklists". I took a receipt, turned it over to the back (blank side), and jotted down 5-10 things that I wanted to believe. Then I set a timer for 24 hours and, before that time elapsed, acted as if I believed those things. My experiment was too successful; by the time 24 hours were up I had ended up in a different county, with little recollection of what I'd been doing, and some policemen asking me pointed questions. (I don't believe any drugs were involved, just sleep deprivation, but I can't say for certain.)
More recently, I rented and saw the film Memento, which explores these techniques in a fictional setting. The concept of short-term forgetting seemed reasonable, and the techniques the character uses to work around it are easily adapted to real life. My initial test involved printing out a pamphlet with some dentistry stuff in tiny type (7 12-pt pages shrunk to fit on the front and back of 1 page, folded in quarters) and carrying it with me to my dentist appointment. I was able to discuss most of the things from my pamphlet, and it did seem that the level of conversation was raised, but there were many other variables as well, so it's hard to quantify the exact effect.
I'm not certain these techniques actually count as "doublethink", since the contradiction is between my "internal" beliefs and the beliefs I wrote down, but it does allow some exploration of the possibilities beyond rationality. I can override my system 2 with a piece of paper, and then system 1 follows.
NB: Retrieving your original beliefs after you've been going off of the ones from the paper is left as an exercise to the student
Humans are biased to overrate bad human behavior as a cause for mistakes.
If a crocodile bites off your hand, it's generally your fault. If the hurricane hits your house and kills you, it's your fault for not evacuating fast enough. In general, most causes are attributed to humans, because that allows actually considering alternatives. If you just attributed everything to, say, God, then it doesn't give any ideas. I take this a step further: everything is my fault. So if I hear about someone else doing something stupid, I try to figure out how I could have stopped them from doing it. My time and ability are limited in scope, so I usually conclude they were too far away to help (space-like separation), but this has given useful results on a few occasions (mostly when something I'm involved in goes wrong).
The decent thing is to orient yourself by whether similar studies replicate.
Not really, since the replication is more likely to fail than the original study (due to inexperience), and is subject to less peer-review scrutiny (because it's a replication). See http://wjh.harvard.edu/~jmitchel/writing/failed_science.htm. The correct thing to consider is followup work of any kind; for example, if a researcher has a long line of publications all saying the same thing in different experiments, or if it's widely cited as a building block of someone's theory, or if there's a book on it.
Regardless, every publish-or-perish paper has an inherent bias toward finding spectacular results.
Right, people only publish their successes. There are so many failures that it's not worth mentioning or considering them. But they don't need to be "spectacular", just successful. Perhaps you are confusing publishing at all, even in e.g. a blog post, with publishing in "prestigious" journals, which indeed only publish "spectacular" results; looking at only those would give you a biased view, certainly, but as soon as you expand your field of view to "all information everywhere" then that bias (mostly) goes away, and the real problem is finding anything at all.
Let's say wearing red every day.
So the study there links red to aggression; I don't want to be aggressive all the time, so why should I wear red all the time? For example, I don't want a red car because I don't want to get pulled over by the cops all the time. Similarly for most results; they're very limited in scope, of the form "if X then Y" or even "X is associated with Y". Many times, Y is irrelevant, so I don't need to even consider X.
Take those Israeli judges who don't give people parole because they don't have enough sugar in their blood right before mealtime. Going and giving every judge a candy before hearing each case, to make it fair, isn't warranted.
Sure, but if I'm involved with a case then I'll be sure to try to get it heard after lunchtime, and offer the judge some candy if I can get away with it.
That's fixable by training Fermi estimates.
You can memorize populations or memorize the Fermi factors and how to combine them, but the point stands regardless; you still have to remember something.
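For instance, a minimal sketch of a Fermi estimate; the factors here are rough remembered values, not looked-up figures.

```python
# Estimate India's population from two remembered factors instead of
# memorizing the answer directly. Both numbers are approximate.
area_km2 = 3.3e6        # rough land area of India
people_per_km2 = 400    # rough population density
estimate = area_km2 * people_per_km2
print(f"~{estimate:.1e} people")   # ~1.3e9, the right order of magnitude
```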
It's a reference to the controversy about whether washing your hands primes you to be more moral. It's an experimental social science result that failed to replicate.
Ah, social science. I need to take more courses in statistics before I can comment... so far I have been sticking to the biology/chemistry/physics side of things (where statistics are rare and the effects are obvious from inspection).
Exhibiting symptoms often considered signs of mental illness. For example, this says 38.6% of the general population have hallucinations. This says 40% of the general population had paranoid thoughts. Presumably these groups aren't exactly the same, so there you go: between 0.5 and 0.8 of the general population. You can probably pull together some more studies with similar results for other symptoms.
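A rough sketch of how those two figures combine under different overlap assumptions; the 0.5-0.8 range presumably also folds in additional symptoms from other studies.

```python
# Bounds on the fraction of people with at least one of the two symptoms.
p_hallucinations = 0.386
p_paranoid = 0.40

lower = max(p_hallucinations, p_paranoid)            # one group contains the other
independent = 1 - (1 - p_hallucinations) * (1 - p_paranoid)
upper = min(1.0, p_hallucinations + p_paranoid)      # groups are disjoint

print(lower, round(independent, 3), round(upper, 3))  # 0.4, 0.632, 0.786
```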
Given replication rates of scientific studies a single study might not be enough.
Enough for what? My question is whether my hair stylist saying "Shaving makes the hair grow back thicker." is more reliable than http://onlinelibrary.wiley.com/doi/10.1002/ar.1090370405/abstract. In general, the scientists have put more thought into their answer and have conducted actual experiments, so they are more reliable. I might revise that opinion if I find evidence of bias, such as a study being funded by a corporation that finds favorable results for their product, but in my line of life such studies are rare.
Single studies that go against your intuition are not enough reason to update. Especially if you only read the abstract.
I find that in most cases I simply don't have an intuition. What's the population of India? I can't tell you, I'd have to look it up. In the rare cases where I do have some idea of the answer, I can delve back into my memory and recreate the evidence for that idea, then combine it with the study; the update happens regardless of how much I trust the study. I suppose that a well-written anecdote might beat a low-powered statistical study, but again such cases are rare (more often than not they are studying two different phenomena).
No need to get people to wash their hands before you do a business deal with them.
I wash my hands after shaking theirs, as soon as convenient. Or else I just take some ibuprofen after I get sick. (Not certain what you were trying to get at here...)
I don't have experience with those, but I'll recommend Graphviz as a free (and useful) alternative. See e.g. http://k0s.org/mozilla/workflow.svg
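As a minimal sketch: Graphviz consumes plain-text .dot files, so a workflow graph can be generated from any script. The node names below are made up for illustration (loosely echoing the writing pipeline mentioned elsewhere on this page).

```python
# Write a tiny workflow graph as a Graphviz .dot file.
edges = [
    ("collect sources", "bullet list"),
    ("bullet list", "group and reorder"),
    ("group and reorder", "draft paragraphs"),
]

lines = ["digraph workflow {"]
lines += [f'    "{a}" -> "{b}";' for a, b in edges]
lines.append("}")

with open("workflow.dot", "w") as f:
    f.write("\n".join(lines))

# Render with: dot -Tsvg workflow.dot -o workflow.svg
```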
The simple answer is to ask someone else, or better yet a group; if D is small, then D^2 or D^4 will be infinitesimal. However, delusions are "infectious" (see Mass hysteria), so this is not really a good method unless you're mostly isolated from the main population.
The more complicated answer is to track your beliefs and the evidence for each belief, and then when you get new evidence for a belief, add it to the old evidence and re-evaluate. For example, replacing an old wives' tale with a peer-reviewed study is (usually) a no-brainer. On the other hand, if you have conflicting peer-reviewed studies, then your confidence in both should decrease and you should go back to the old wives' tale (which, being old, is probably useful as a belief, regardless of truth value).
Finally, the defeatist answer is that you can't actually distinguish that you are delusional. With the film Shutter Island in mind, I hope you can see that almost nothing is going to shake delusions; you'll just rationalize them away regardless. If you keep notes on your beliefs, you'll dismiss them as being written by someone else. People will either pander to your fantasy or be dismissed as crooks. Every day will be a new one, starting over from your deluded beliefs. In such a situation there's not much hope for change.
For the record, I disagree with "delusional disorders being quite rare"; I believe D is somewhere between 0.5 and 0.8. Certainly, only 3% of these are "serious", but I could fill a book with all of the ways people believe something that isn't true.
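To put a number on the D^2 / D^4 point in the first paragraph, a quick sketch. The base rate here is purely illustrative, and note it is the chance that a given person is wrong about the specific question you're checking, not the overall fraction of people holding some delusion.

```python
# Chance that n independent people all share one particular delusion,
# assuming each holds it with base rate D.
D = 0.05   # illustrative, not a measured figure
for n in (1, 2, 4):
    print(n, D ** n)   # 0.05, 0.0025, 6.25e-06
```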
It's already random; replacing randomness with more randomness doesn't help except for mixing in new tasks. I went through ~50 tasks today, so it's not really that bad; just that I feel like some tasks should have more time dedicated. "Is putting animals in captivity an improvement?" is not the sort of question you want to dash off in 2 minutes. (Final answer: list of various animal rights groups).
The real problem is the list keeps growing longer; I'm starting to run into O(n^2) behavior in my text editor. It's not really designed for handling a FIFO queue. I've been staring at TaskWarrior, which might be adapted for doing the things I want.
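A minimal sketch of the difference, assuming the tasks live in a plain one-task-per-line text file (the filename is hypothetical): popping from the front of a list costs O(n) per task, so draining the whole queue is O(n^2), while collections.deque pops from the left in O(1).

```python
import random
from collections import deque

# "todo.txt" is a stand-in for the big one-task-per-line file.
with open("todo.txt") as f:
    tasks = [line.strip() for line in f if line.strip()]

random.shuffle(tasks)        # randomize once, as described
queue = deque(tasks)         # O(1) popleft, unlike list.pop(0)

next_task = queue.popleft()
print("next:", next_task)
```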
So more recently I've been using a big 6000-line text file; it has all of my TODOs as well as some URLs. I randomized the order a while ago and now I just go through them. I've stalled on that (actually doing things is hard, particularly when they're vague things like "post story"), so I might go back to feed reading; I experimented a bit with TinyTinyRSS but Feedly is probably a better choice.
Well, taxation has the threat of violence, in that if you don't pay your taxes you will eventually be caught and sentenced to jail for tax evasion... hmm, maybe I should do a "The definition of X" series. They should really be wiki pages though, not posts...
It sounds like we're in violent agreement here. I've already verified experimentally that writings by mathnerd314_1998 are clear to mathnerd314_2009. My brain doesn't change that much over time.
Instead, I have two other questions:
1. Can mathnerd314_2014 understand Gunnar_Zarncke_2014 on the same level he understands mathnerd314_1998?
2. If both mathnerd314_2014 and mathnerd314_2020 independently write down definitions, will they be textually different?
My hypothesis is that #1 is "no", because internal organization of concepts varies dramatically from person to person, and that #2 is "yes", because people do change over time.
Well, there's a tricky thing in mathematics called "the law of excluded middle". Using the law, you can e.g. prove that a implies b is logically equivalent to (not a) or b. It also lets you do existence proofs by proving it isn't possible for there to be no examples. So in classical logic every statement is confused with its double negation.
I generally try to use intuitionistic logic though, where a->b is not logically equivalent to anything else and double negations have to be written out. You do have a -> not-not-a, but that only goes one direction and results in a weaker statement. If you look at my other reply with an intuitionistic frame of mind, then you'll see that the "only" is an implication, with no negation in sight.
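For concreteness, a minimal Lean sketch (the theorem names are mine): double-negation introduction goes through without any classical axioms, while the reverse direction needs them.

```lean
-- a → ¬¬a: constructively fine.
theorem dni (a : Prop) : a → ¬¬a :=
  fun ha hna => hna ha

-- ¬¬a → a: only with classical logic (law of excluded middle).
theorem dne (a : Prop) : ¬¬a → a :=
  fun hnna => Classical.byContradiction hnna
```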
Indeed, it's very depressing. I doubt I'll ever be able to understand other people, but I do have some hope for internal consistency in my usage (so mathnerd314_February2014 writes things that seem comprehensible to mathnerd314_July2020). I've collected my early 1990's writings and they all sort of "click" into place, in that I understand them well enough to rewrite them word-for-word. Perhaps by writing down definitions for my words I'll be able to see how the concepts have evolved over time (or that they haven't changed).
You can always be wrong. Even when it's theoretically impossible to be wrong, you can still be wrong.
You missed the context, which is when someone claims "This can't be wrong." Rule #1 clearly states the definition can be wrong. On the other hand, there are different levels of wrongness. Sure, these rules are most likely wrong and incomplete, but they are more correct than having no rules at all. And the reason definitions aren't the best way to give semantics is because we already have a better semantics, namely the "similarity cluster". (Map is not the territory, etc.) But forcing someone to give a definition that follows these 17 rules gives you the similarity cluster, and avoids pretty much all of Eliezer's 37 ways of using words wrongly (See the superscripts!). There might be other ways of using words wrongly, but they're going to be either obvious or so subtle that nobody can catch them anyway.
As for why I wrote this article, it's simple: I need definitions of the things on my GTD list (in particular, I need a direct specification of what constitutes a "physical, visible action" for the next-actions list), and I recalled an EY post about definitions which was his 37 ways. But that was all about how to do it wrongly, and one of my tasks is "don't think negatively", so I rewrote it. It was and is sitting in my WhatIs:definition zim wiki page. I posted it here to get some commentary and maybe someone checking that I interpreted his points correctly, which I've been getting. (Thanks guys! :-))
Indeed, but one of Eliezer's points was that mathematical objects, e.g. the set of prime numbers, don't need labels. I can write {n : n > 1 and n has no factors besides 1 and n} without giving it a name at all, or just call it P.
If you require every word you use to have a definition, and ensure the definitions follow these rules, and then consistently use the words according to their definitions, then it follows that you are using the words correctly and not wrongly.
So I guess these could be the maxims for writing:
know the definition of every word you use
ensure the definitions follow these 17 rules
use words according to their definition
A prime number n is a number whose only factors are multiplicative units and n times a multiplicative unit (and these two sets are distinct). Typical examples include 2, 3, 5, 7 and 11. Less-typical examples include -2 and 1+i; they are often excluded from consideration in mathematics.
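A small sketch of that definition, restricted to ordinary integers (where the multiplicative units are just 1 and -1); the Gaussian-integer cases like 1+i are out of scope here.

```python
def is_prime_by_definition(n: int) -> bool:
    """n's only factors are units and n times a unit, and those two sets differ."""
    if n == 0:
        return False
    units = {1, -1}
    allowed = units | {n * u for u in units}         # {1, -1, n, -n}
    if allowed == units:                             # the two sets coincide (n = 1 or -1)
        return False
    divisors = {d for d in range(1, abs(n) + 1) if n % d == 0}
    divisors |= {-d for d in divisors}
    return divisors <= allowed

print([k for k in range(2, 12) if is_prime_by_definition(k)])   # [2, 3, 5, 7, 11]
print(is_prime_by_definition(-2))                                # True
```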
I took the survey; apparently I get karma for that? :-)
I look at it in terms of efficiency; sites like reddit are simply inefficient ways to communicate. They are good at making random connections and exploring new subject areas, and that is what I use them for: if I have heard of a subject, but don't know about it, I find a subreddit on the topic and subscribe.
As a tool for discourse, however, there is much to be desired; communication is lossy (many posts are simply not upvoted enough to be seen) and interspersed with noise (unrelated but "viral" posts). Google Reader is almost lossless; it maintains a buffer of all messages for 30 days and then archives them so that they are available in search results but not as unread items. If one reads every feed to its end at least once a month, then no data is lost.
Google Reader thus has the odd effect of making one commit; either you are subscribed to a feed, and read every post of it, or you are not, and never see it anywhere. I have not used Reader for more than a few years, and furthermore haven't conducted a survey of its users, but I would theorize that Reader users as a whole are more productive/active than non-users as a result. Perhaps it could be a question on the next LessWrong survey.