Any time you find yourself being tempted to be loyal to an idea, it turns out that what you should actually be loyal to is whatever underlying feature of human psychology makes the idea look like a good idea; that way, you'll find it easier to fucking update when it turns out that the implementation of your favorite idea isn't as fun as you expected!
I agree that this is a step in the right direction, but I want to elaborate why I think this is hard.
It is my impression that many utopians stay loyal to their chosen tactics, which are supposed to bring the utopia closer, even after the effectiveness of those tactics comes into question. My hypothesis for why this can happen is that, typically, the tactics are relatively concrete, whereas the goals they are supposed to achieve are usually quite vague (e.g. "greater good"). Thus when goals and tactics conflict, the person who tries to reconcile them will find it easier to modify the goals than the tactics, perhaps without even noticing that the new goals may be slightly different from the old ones, since due to that vagueness the old goals and the new goals overlap so much. Over time, the goals may drift until they are quite different from the starting point. At the same time, since tactics are more concrete, it is easier to notice changes in them.
I suspect that in your case we might observe something similar, since it is often quite hard to pinpoint exactly what underlying features of human psychology make a certain idea compelling.
really smart people who know lots of science and lots of probability and game theory might be able to do better for themselves
I agree that science, probability and game theory put constraints on how hard problems of politics could be solved. Nevertheless, I suspect that those constraints, coupled with the vagueness of our desires, may turn out to be lax enough to allow many different answers to most problems. In that case this idea would help to weed out a lot of bad ideas, but it may not be enough to choose among the rest. Alternatively, those constraints may turn out to be too restrictive to satisfy many desires people find compelling. Then we would get some kind of impossibility theorem, and the question of which requirements to relax and which imperfections to tolerate.
From doing this internet propaganda in the early years of the internet, I learned how to do propaganda. You don't appeal to emotion, or to reason, or anything. You just SHOUT. And REPEAT, and explain the position, and let the reader defend it for himself.
In the end, most readers agree with you (if you are right), but they will come up to you, much as you did, and say "While you are right, I see that, you are doing yourself a disservice by being so emotional--- you aren't persuasive...."
But I persuaded this reader! The fact is, I am persuasive, and maximally so. When there is a hostile political environment, if a paper is called "bullshit" or "pseudoscience", you need to first MOCK the idiots calling it that, so as to establish a level playing field. That means calling them "douchebag", "fuckwit", "turd-brain", etc, so that both you and the other person sound like children fighting in the playground, no authority.
Then you need to state the objective case (after the name-calling and cussing, or simultaneously), and then wait. If you are objectively right, people will sort it out on their own time, you don't have to do anything. The people who didn't sort it out will say "oh my, there's a controversy" and will keep an open mind.
It's classic propaganda techniques, and it can be used for good as easily as it can be used for evil. Of course, when calling people idiots for not agreeing with material that is called crackpot, you had better be careful, because if you are not right about the material, if it is crackpot, you are gone for good. The main difficulty is evaluating the work well, understanding it fully, and making sure that it is not crackpot, before posting the first cussword.
I have found it interesting and thought-provoking how this quote basically inverts the principle of charity. Sometimes, for various reasons, one idea is considered much more respectable than another. Since such an unequal playing field of ideas may make it harder for the correct idea to prevail, it might be desirable to establish a level playing field. When two people believe different things and there is no cooperation between them, the person who holds the more respectable opinion can unilaterally apply the principle of charity and thus help to establish one.
However, the person who holds the less respectable opinion cannot unilaterally level the playing field by applying the principle of charity, so they resort to shouting (as the quote describes) or, in other contexts, satire, although, just like shouting, satire is often used for other, sometimes less noble purposes.
To what extent do you think these two things are symmetrical?
When we are talking about science, social science, history or other similar disciplines, the disparity may arise from the fact that most introductory texts present the main ideas, which are already well understood and well articulated, whereas actual researchers spend the vast majority of their time on the poorly understood edge cases of those ideas (it is almost tautological to say that the harder and less understood part of your work takes up more of your time, since well understood ideas are often called such precisely because they no longer require a lot of time and effort).
A clunky solution: right click on "context" in your inbox, select "copy link location", paste it into your browser's address bar, trim the URL and press enter. At least that's what I do.
Different subjects do seem to require different thinking styles, but, at least for me, those styles are often quite hard to describe in words. If one has an inclination towards one style of thinking, can this inclination manifest in seemingly unrelated areas, thus leading to unexpected correlations? This blog post presents an interesting anecdote.
I have taken the survey.
I remember reading the SEP article on Feminist Epistemology, where I got the impression that it models the world in a somewhat different way. Of course, this is probably one of those cases where epistemology is tailored to suit political ideas (and its proponents themselves most likely wouldn't disagree), but much less vice versa.
When I (or, I suppose, most LWers) think about how knowledge about the world is obtained, the central example is empirical testing of hypotheses, i.e. a situation where I have more than one map of a territory and I have to choose one of them. An archetypal example of this is a scientist testing hypotheses in a laboratory.
On the other hand, feminist epistemology seems to be largely based on Feminist Standpoint Theory, which basically models the world as being full of different people who are adversarial to each other and promote different maps. It seems to assume that you cannot easily compare the accuracies of maps, either because they are hard to check or because they depict different (or even incommensurable) things. The central question in this framework seems to be "Whose map should I choose?", i.e. the choice is not between maps, but between mapmakers. Well, there are situations where I would do something that fits this description very well: if I were trying to decide whether to buy a product which I was not able to get my hands on, and all the information I had was two reviews, one from the seller and one from an independent reviewer, I would be more likely to trust the latter's judgement.
It seems to me that the first archetypal example is much more generalizable than the second one, and the strange claims cited in Pfft's comment are what one gets when one stretches the second example to extreme lengths.
There also exists Feminist Empiricism, which seems to be based on the idea that since one cannot interpret empirical evidence without a framework, something must be added to an inquiry, and since biases that favour desirable interpretations are something, it is valid to add them (since this is not Bayesian inference, this is different from the problem of the choice of priors). Since the whole process is deemed adversarial (scientists in this model look like prosecutors or defense attorneys), different people inject different biases and then argue that others should stop injecting theirs.
(disclaimer: I read the SEP article some time ago and wrote about these ideas from memory; it wouldn't be a big surprise if I misrepresented them in some way. In addition, there are other obvious sources of potential misrepresentation.)
I think that one very important difference between status games and things that might merely remind people of status games is how long they are expected to stay in people's memory.
For example, I play pub quizzes, and often I am the person responsible for the answer sheet. Due to strict time limits, discussion must be as quick as possible, so in many situations I (or another person responsible for the answer sheet) have to reject an idea a person has come up with based on vague heuristic arguments, and usually there is no time for long and elaborate explanations. From the outside, this might look like a status-related thing, because I dismissed a person's opinion without a good explanation. However, the key difference is that this does not stay in anyone's memory. After a minute or two, all these things that might seem related to status are already forgotten. Ideally, people should not even come into the picture (because paying attention to anything but the question is a waste of time); very often I do not even notice who exactly came up with a correct answer. People tend to forget, or not even pay attention to, whom credit should be given, and they likewise tend to forget cases where their idea was dismissed in favour of another person's. In this situation, small slights that happened because discussion had to be as quick as possible are not worth remembering, and one can be pretty certain that other people will not remember them either. Also, if "everyone knows" they are quickly forgotten, they are not very useful in status games either. If something is forgotten, it cannot go unforgiven.
Quite different dynamics arise if people have long memories for small slights and "everyone knows" that people have long memories for them. Short memory made slights unimportant and useless for status games, but in the second case, where they are important and "everyone knows" they are important, they become useful for social games, and therefore a greater proportion of them might have some status-related intentionality behind them rather than just being random noise.
Similarly, one might play a board game that involves things that look like social games, e.g. backstabbing. However, it is expected that when the pieces go back into the box, all of that is forgotten.
I think that what differentiates information sharing from social games is which of them is more likely to be remembered and which is likely to be quickly forgotten (and whether or not "everyone knows" which is most likely to be forgotten or remembered by others). Of course, different people might remember different things about the same situation, and they might be mistaken about what other people remember or forget; that is what a culture clash might look like. On the other hand, the same person might tend to remember different things about different situations, so people cannot be neatly divided into different cultures, but at the same time the frequency of situations of each type seems to differ from person to person.
http://www.tue-tm.org/snijders/papers_on_prediction/Grove_Meehl_1996.pdf
I am really not a sociologist, so someone correct me if what I'll say is totally wrong, but it seems to me that there are at least two quite distinct types of religion (and a continuum of possibilities in between). In the first type, "religion" religion (gods, clergy, etc.) is almost one and the same thing as something like the civil religion of a community. For example, if you found out that a tribe adds various religious chants to their local "judicial process", which is otherwise very similar to a Western judicial process, you would not hesitate to call it a religious ritual, even though the chanting part may be inessential. In the second type, those are two different things. In my mind the first type roughly corresponds to paganism, and the second to religions similar to Christianity. I think that religions of the first type may be useful to "grease the wheels" of society (especially a not very sophisticated one), and their leaders may not even be that interested in spreading them, except for personal gain. However, note that this is a vague guess; I would need to read much more about pagan societies to understand whether it is at least partially the case. It is unclear to me what religions of the second type do, because greasing the wheels of society is covered by a civic religion.
Another possible benefit of some types of religion is providing incentives to get some (although not all) things correct. If you believe that god judges you on whether or not your thinking about the world is correct, you might feel and behave as if you have "skin in the game" and thus might end up with a higher motivation to avoid deceiving yourself (and others) for personal gain [1]. In many contemporary societies personal belief in god seems isolated from most beliefs that have practical consequences. If your society practices trial by combat, and your belief in god makes you willing to fight against a stronger person, believing that you will win just because you are in the right, your belief will have practical personal consequences. However, in a contemporary society belief in god is usually harmful only indirectly. Thus, the question is: does feeling that you have "skin in the game of believing the truth about god's creation" lead to enough correct beliefs to outweigh having incorrect beliefs about god? I think it is likely that at least in some cases it might. In addition, perhaps in some cases religious beliefs might be personally helpful if they are harmless and they displace potentially dangerous beliefs that are new and have not yet shed their most extreme parts.
I think that in both cases religion/belief in god is not strictly a necessity, but in some cases it may (or may not) turn out to be somewhat useful if no better alternatives are available at the time.
[1] I don't like it. Also, I guess that it is probably not very good for society in the long run.
I have read somewhere that, all else being equal, dialogues attract people's attention better than monologues, at least on television. Perhaps in some cases some ideas (including old sequence posts, especially the more controversial ones) could be presented as Socratic dialogues. Or perhaps, if a post is being written collaboratively, one person could write a position and another (or two others) could ask inquisitive questions or try to find holes in the argument. You would think that having comments already covers that, and in a sense this is indeed similar to having two waves of comments. However, in this case the post that is seen by most people has already covered at least a few objections and is thus of relatively higher quality. Secondly, this allows "debate" posts that do not present any clear conclusion and contain only arguments for different positions (where exactly the controversy lies is often an interesting and informative question). Thirdly, I conjecture that it is psychologically more pleasant to be nitpicked by one or two people (whom you know were explicitly asked to do that) than by a lot of commenters at once. You could call this series "Dialogues Concerning (Human) Rationality" or something like that.
Of course, not all posts should be written as dialogues (e.g. some more technical ones might be difficult to structure this way).
Assuming this trend exists (I haven't noticed it myself), I think that in addition we also have the fact that reaching higher-hanging fruit requires better tools.
Checking "Enable Anti-Kibitzer" in Preferences already does that.
If lack of progress is what causes negative emotions, then it seems to me that another possible reason why startup founders might have mood swings is that they usually build one startup at a time. Therefore, if you are not making any progress, you are not making any progress at all. John Conway advises that mathematicians should work at several problems at once in order to avoid such mood swings:
Work at several problems at a time. If you only work on one problem and get stuck, you might get depressed. It is nice to have an easier back-up problem. The back-up problem will work as an anti-depressant and will allow you to go back to your difficult problem in a better mood. John told me that for him the best approach is to juggle six problems at a time.
Startup founders may not have such a luxury, but perhaps at least in some cases it is possible to structure things in a similarly disjunctive way, where you can note the problems in part A, but avoid despair by noticing that at least you are making progress in part B (of course, if different people are responsible for parts A and B, the problem might remain for each of them). Averaging might make your progress look closer to linear in a certain sense.
In an ideal (although not very realistic) scenario LW could have a karma denominated prediction market. However, that would require a lot of work to implement.
It seems to me that one reason why some people behave irrationally is that they start implicitly thinking about themselves in terms of a particular identity, a particular archetype. If people of that archetype tend to be bad at some X and one is also bad at X, one might not feel an irresistible urge to fix it, even though intellectually one might agree that it would be better to fix it.
In a university setting, at least at the beginning, two such archetypes are the "hard-working (but not necessarily talented) student" and the "talented but lazy student". You might even observe a negative correlation in your university. However, most likely this is a result of Berkson's bias: people who are both hard-working and talented are probably studying at a more prestigious university than yours, so you don't meet them. Thus whenever you notice that you are very talented yet work very little, you should not think about how efficient you are; instead, think in terms of not using your potential to the fullest.
If you are able to find people who are passionate about learning about similar things as you, team up with them (an example), create your own unofficial book club or your own unofficial seminar.
Try to be strategic. Use Paul Graham's heuristic "always produce". Write a diary where you can log what you have learned and what you still don't know.
Currently, most of the advice is very abstract, and a lot of it is common sense. Perhaps people from countries with a lot of prestigious universities (US, UK, Germany, Switzerland, etc.) could post more concrete advice, for example links or step-by-step instructions on how to apply to a university, how to apply for scholarships and internships in their particular location, and what other useful educational resources they know of. Even if such advice is not as generalizable as the more abstract advice, it might still be useful for some people, and even people from different locations, to whom the exact wording of the advice might not be directly applicable, might still find enough similarities that reading it is useful for them.
You can start copy/pasting interesting things to LessWrong discussion even if they are not from Eliezer's Facebook page. For example, LessWrong could have "Best of" threads (similar to Reddit's "Best of" subreddit) where people could post the most interesting threads or comments they have found elsewhere (this is different from "Rationality Quotes" threads).
I remember reading the idea expressed in this quote in an old LW post, older than Haidt's book which was published in 2012, and it is probably older than that.
In any case, I think that this is a very good quote, because it highlights a bias that seems to be more prevalent than perhaps any other cognitive bias discussed here and motivates attempts to find better ways to reason and argue. If LessWrong had an introduction whose intention was to motivate why we need better thinking tools, this idea could be presented very early, maybe even in a second or third paragraph.
Correctness is essential, but another highly desirable property of a mathematical proof is its insightfulness, that is, whether it contains interesting and novel ideas that can later be reused in others' work (often these are regarded as more important than the theorem itself). These others are humans, and they desire, let's call it, "human-style" insights. Perhaps if we had AIs that "desired" "computer-style" insights, some people (and AIs) would write their papers to provide them and investigate problems that are most likely to lead to them. Proofs that involve computers are often criticized for being uninsightful.
Proofs that involve steps requiring the use of computers (as opposed to formal proofs that employ proof assistants) are sometimes also criticized for not being human-verifiable: while humans make mistakes and computer software can contain bugs, mathematicians can sometimes use their intuition and sanity checks to find the former, but not necessarily the latter.
Mathematical intuition is developed by working in an area for a long time and being exposed to various insights, heuristics and ideas (mentioned in the first paragraph). Thus not only are computer-based proofs harder to verify, but if an area relies on a lot of non-human-verifiable proofs, it might be significantly harder to develop an intuition in that area, which might then make it harder for humans to create new mathematical ideas. It is probably easier to understand a landscape of ideas that were created to be human-understandable.
That is neither to say that computers have little place in mathematics (they do have one: they can be used for formal proofs, generating conjectures, or gathering evidence about which approach to use to solve a problem), nor is it to say that computers will never make human mathematicians obsolete (perhaps they will become so good that humans will no longer be able to compete).
However, it should be noted that some people have different opinions.
My intuition is that this is one of those cases where, given a time t, "evaluation on the left side of t" and "evaluation on the right side of t" give different results. It seems to me that at any given time a decision is made about future actions (and not past ones), thus "evaluation on the left side of t" seems to be more important, and it is the one that makes me reluctant to play this game. It seems to me that using "evaluation on the right side of t" (in cases where they differ) might give some strange results, e.g. a murder having no victims.
It seems that the left side of t and the right side of t differ whenever there is a different number of people on the two sides. E.g. if you make an exact copy of a person and their entire memory, the "left identity" and "right identity" (perhaps there are better terms) intuitively seem to become two different things.
It is relatively easy to understand the situation when one person owes money to another person, having borrowed it before. It is also not much more difficult to understand the situation when one person owes another person a compensation for damages after being ordered by court to pay it. Somewhat more vague is a situation when there is no court involved, but the second person expects the first one to pay for damages (e.g. breaking a window), because it is customary to do so. All these situations involve one person owing a concrete thing, and the meaning of the word "owes" is (disregarding edge cases) relatively clear.
Problems arise when one tries to go from the singular to the plural while still wanting to use intuitions from the singular usage. Quite often there are many ways to extend the meaning of a singular verb to a plural one that are still compatible with the meaning of the former. For example, one can extend the singular verb "decides" to many different group decision-making procedures (voting, lottery, one person deciding for everyone, etc.); saying "a group decides" simply obscures this fact.
Concerning the word "owe", even when we have a well defined group of people, we usually prefer to either deal with them separately (e.g. customers may owe money for services) or create a juridical person which helps to abstract a group of people as one person and this allows us to use the word "owe" in its singular verb meaning. There are more ways to extend the meaning of the word "owe" from singular to plural, but they are quite often contentious.
"Western civilizations" is a very abstract group of people. It is not a well-defined group of people. It is not a juridical person. It is not a country. It is not a clan. The singular verb "owes" is clearly inapplicable here, and if one wants to use it, one must extend its meaning from the singular to the plural. But there seem to be a lot of possible extensions. Therefore one has to resort to other kinds of arguments (e.g. consequentialist arguments, arguments about incentives, etc.) to decide which meaning one prefers. But if that is the case, one can bypass the word "owe" entirely and go straight to those arguments, because that is essentially what one is doing anyway; words whose meanings one knows only very vaguely probably do not do much to shape the overall argument.
In addition, "being disadvantaged as a result of imperialism" is very dissimilar from "having a window broken by a neighbour"; it is not a concrete thing. The central example of "owing something" is owing a concrete and well-defined thing. Whenever we have a definition that works well for a central example and we want to use it for a noncentral one, we again must extend it, and there is often more than one way to do so (Schelling points sometimes help to choose among all the possible extensions, but often there is more than one of them, and the choice of extension becomes a subject of debate).
In general, I would guess that if someone argues that an entity as abstract as "western civilizations" owes something to someone, most likely they are either unknowingly rationalizing a conclusion they came to by other means, or simply sloppily using an intuition from the singular usage of "owes". The meaning of the word can be extended in many ways, many of which would still be compatible with the singular meaning; some of these would imply that "new generations are not responsible for the sins of past ones" and some would not. Therefore it is probably better to bypass the word altogether and attempt to solve a better-defined problem.
Other words where trying to go from singular to plural often causes problems are: "owns", "chooses", "decides", "prefers" (problem of aggregation of ordinal utilities), etc.
Even if there were problems that were solved by such collective action, you should not create plans that rely on things like that happening (by definition, you cannot create a spontaneous action). Your plans should not rely on the problem solving itself. Edit: unless the type of spontaneous collective action you need is known to happen often, or the problem you want to solve is of a type that is known to often solve itself.
The actions of the crowd during the fall of the Berlin Wall seem to be an example of an event that fits the description, as it wasn't centrally organized; many people simply tried to make use of an opportunity that suddenly appeared due to the actions of the East German government and other circumstances.
Similarly, I often remind myself that, as a general rule, I should avoid using third person imperative mood in my thinking and speech.
The Visual Perception and Reproduction of Forms by Tachistoscopic Methods, Samuel Renshaw
So far only one person (Randaly) has replied. Does any native speaker want to volunteer? Edit: two people (Randaly and Normal_Anomaly)
I think it is important to answer why people go to LessWrong: whether it is perceived primarily as a place where one goes to improve one's rationality that happens to be an internet forum, or as an internet forum where one can read about interesting things, such as rationality (I think that experiencing an intellectual journey is somewhere in between, but probably closer to the latter). Because there are a lot of large forums where you can read a lot of interesting things; for example, r/askscience and r/askhistorians have hundreds of thousands of subscribers and a lot of contributors who produce huge quantities of interesting content.
A place where people go to improve their rationality can take many forms. It doesn't even have to be a blog, a forum, or a wiki. If I allowed myself to be a bit starry-eyed, I would say that it would be really interesting if, for example, LessWrong had its own integrated Good Judgement Project, or if LW had its own karma- (or cryptocurrency-) denominated prediction markets. Of course, ideas like these would require a lot of effort to implement.
This link is often useful for obtaining paywalled papers.
Choking Under Social Pressure: Social Monitoring Among the Lonely, Megan L. Knowles, Gale M. Lucas, Roy F. Baumeister, and Wendi L. Gardner
Am I correct to paraphrase you this way: maximizing E[X] and maximizing P(X > a) are two different problems?
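A toy numeric sketch of why the two objectives can disagree (my own example, not from the thread): a sure payoff can beat a risky one on expected value while losing on the probability of clearing a threshold.

```python
# Two discrete payoff distributions, each a list of (value, probability) pairs.

def expected_value(dist):
    """E[X] for a discrete distribution."""
    return sum(v * p for v, p in dist)

def tail_prob(dist, a):
    """P(X > a) for a discrete distribution."""
    return sum(p for v, p in dist if v > a)

# A pays 5 for sure; B pays 10 with probability 0.3, otherwise 0.
A = [(5, 1.0)]
B = [(10, 0.3), (0, 0.7)]

a = 6  # threshold: "do I clear at least 6?"

print(expected_value(A), expected_value(B))  # A wins on E[X] (5 vs 3)
print(tail_prob(A, a), tail_prob(B, a))      # B wins on P(X > 6) (0 vs 0.3)
```

So an agent maximizing E[X] picks A, while an agent maximizing P(X > 6) picks B.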
This sounds similar to Coherence theory of truth.
I don't know the exact context of this particular quote, but George Pólya wrote a few books about how to become a better problem solver (at least in mathematics). In that context the quote is very reasonable.
Octopuses are solitary animals, whereas most working animals are social. Which leads to another interesting question - is it possible to breed octopuses to become social animals?
It is better to solve one problem five different ways, than to solve five problems one way
George Pólya, or at least attributed to him, as I am unable to find the exact source, despite its being widely quoted in texts related to mathematics education or problem solving in general.
Of course, this assumes that "probability 0" entails "impossible". I don't think it does. The probability of picking a rational number may be 0, but it doesn't seem impossible.
Given an uncountable sample space, P(A)=0 does not necessarily imply that A is impossible. A is impossible iff the intersection of A and the sample space is empty, i.e. A contains no possible outcomes.
Intuitively speaking, one could say that P(A)=0 means that A resembles "a miracle" in the sense that if we perform n independent experiments, we cannot increase the probability that A happens at least once by increasing n. Whereas if P(B)>0, then by increasing the number of independent experiments n we can make the probability of B happening at least once approach 1.
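The repeated-experiments claim follows from the standard complement rule: the probability of at least one occurrence in n independent trials is 1 − (1 − p)^n. A small numeric sketch of my own (not part of the original comment):

```python
def at_least_once(p, n):
    """P(an event with per-trial probability p occurs at least once in n trials)."""
    return 1 - (1 - p) ** n

for n in (1, 10, 1000):
    print(n, at_least_once(0.0, n), at_least_once(0.01, n))
# With p = 0 the result is 0.0 for every n ("a miracle" never accumulates),
# while with p = 0.01 it climbs towards 1 as n grows.
```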
Done. Should I also add a link to the Slovak translation of the book?
Am I correct to rephrase your idea as "People should develop a habit of applying reductio ad absurdum and, to some extent, the absurdity heuristic more often"?
http://moscow.sci-hub.bz/bd92d5ea3f64a416c7833533324fafbc/hedges2007.pdf
http://sci-hub.org/ is sometimes useful for finding papers
PMed all of them. Does anyone else also want to volunteer?
Added.
META. LessWrong Welcome threads have changed very little since late 2011. Should something be updated?
A nitpick. This is a paraphrase of the original quote which can be found in The Portrait of Mr. W. H., page 29. The original quote is:
A thing is not necessarily true because a man dies for it.
Law (especially private law) seems to be a better example of a domain where words themselves are very important, because it can hardly be any other way. For example, whether or not something qualifies as a breach of contract is important by itself.
There is a difference between one-off events and events that fall into a certain pattern and narrative. The latter are often remembered as being an example of events that fall into that narrative. In my impression Kennedy's assassination, despite all conspiracy theories surrounding it, is rarely thought of as being a part of a bigger narrative.
What conclusions have you arrived at? Do you think some statements mentioned are incorrect or do you think that something else (e.g. role of Shah Mohammad Reza Pahlavi himself and other people within Iran itself, or ideology of Iranian Revolution and role of people like Ali Shariati, or role of contemporary events in neighbouring countries or something else entirely) should be more emphasized?
In addition to that, perhaps it is because they are much more likely to perform a ritual of praying to the god, whereas rituals of fending off the devil seem to be rare. Thus the latter becomes a vague and remote figure, easy to forget and disbelieve.
Money is not the only thing you can be hungry for, e.g. you can be hungry for fame. Or you can be motivated by the thrill of having a calling in your life. Some books are described as page-turners or even unputdownable, and I think if one's life has a "well-written" story, then being the main protagonist of that story might make having a dream and following it at least as interesting as those books. For smaller goals, perhaps the feeling that your family has a certain stature that you have to maintain would be enough.
One thing that helps motivation is a sense of direction. If children have concrete examples of success (and I think that being from a rich family can usually provide some, although it is not the only source of examples) and concrete roadmaps to success, then they might be more confident in trying various ambitious things. If children do not have concrete and vivid examples, they have to rely on abstract ones, which are less likely to affect them at a gut level. E.g. even if, given their talents, they intellectually understand that A Thing is realistically within their reach, they might still not feel it emotionally, and they might choose a safer and less ambitious path not because of intellectual considerations, but because that path feels more realistic, more concrete, and therefore less uncertain and less scary. For example, someone from a remote small town who has no concrete examples of success might not even consider applying to a prestigious university or company abroad, even if they knew the application procedure and knew (at the intellectual level) that, given their talents, they might have a shot, because that might feel too unrealistic and scary.
Kissinger was not rushing to end our conversation that morning, and I had one more message to give him. “Henry, there’s something I would like to tell you, for what it’s worth, something I wish I had been told years ago. You’ve been a consultant for a long time, and you’ve dealt a great deal with top secret information. But you’re about to receive a whole slew of special clearances, maybe fifteen or twenty of them, that are higher than top secret.
“I’ve had a number of these myself, and I’ve known other people who have just acquired them, and I have a pretty good sense of what the effects of receiving these clearances are on a person who didn’t previously know they even existed. And the effects of reading the information that they will make available to you.
“First, you’ll be exhilarated by some of this new information, and by having it all―so much! incredible!―suddenly available to you. But second, almost as fast, you will feel like a fool for having studied, written, talked about these subjects, criticized and analyzed decisions made by presidents for years without having known of the existence of all this information, which presidents and others had and you didn’t, and which must have influenced their decisions in ways you couldn’t even guess. In particular, you’ll feel foolish for having literally rubbed shoulders for over a decade with some officials and consultants who did have access to all this information you didn’t know about and didn’t know they had, and you’ll be stunned that they kept that secret from you so well.
“You will feel like a fool, and that will last for about two weeks. Then, after you’ve started reading all this daily intelligence input and become used to using what amounts to whole libraries of hidden information, which is much more closely held than mere top secret data, you will forget there ever was a time when you didn’t have it, and you’ll be aware only of the fact that you have it now and most others don’t... and that all those other people are fools.
"Over a longer period of time―not too long, but a matter of two or three years―you’ll eventually become aware of the limitations of this information. There is a great deal that it doesn’t tell you, it’s often inaccurate, and it can lead you astray just as much as the New York Times can. But that takes. a while to learn.
“In the meantime it will have become very hard for you to learn from anybody who doesn’t have these clearances. Because you’ll be thinking as you listen to them: ‘What would this man be telling me if he knew what I know? Would he be giving me the same advice, or would it totally change his predictions and recommendations?’ And that mental exercise is so torturous that after a while you give it up and just stop listening. I’ve seen this with my superiors, my colleagues... and with myself.
“You will deal with a person who doesn’t have those clearances only from the point of view of what you want him to believe and what impression you want him to go away with, since you’ll have to lie carefully to him about what you know. In effect, you will have to manipulate him. You’ll give up trying to assess what he has to say. The danger is, you’ll become something like a moron. You’ll become incapable of learning from most people in the world, no matter how much experience they may have in their particular areas that may be much greater than yours.”
It was a speech I had thought through before, one I’d wished someone had once given me, and I’d long hoped to be able to give it to someone who was just about to enter the world of “real” executive secrecy. I ended by saying that I’d long thought of this kind of secret information as something like the potion Circe gave to the wanderers and shipwrecked men who happened on her island, which turned them into swine. They became incapable of human speech and couldn’t help one another to find their way home.
- Daniel Ellsberg, Secrets: A Memoir of Vietnam and the Pentagon Papers
P(A or B) = P(A) + P(B) - P(A and B)
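The inclusion-exclusion identity above can be sanity-checked on a small finite example (an illustrative uniform-die sketch, not from the original comment):

```python
# Sample space of a fair six-sided die; each outcome has probability 1/6.
omega = set(range(1, 7))
A = {2, 4, 6}   # "roll is even"
B = {4, 5, 6}   # "roll is at least 4"

def p(event: set) -> float:
    """Probability of an event under the uniform distribution on omega."""
    return len(event & omega) / len(omega)

lhs = p(A | B)                     # P(A or B)
rhs = p(A) + p(B) - p(A & B)       # P(A) + P(B) - P(A and B)
assert abs(lhs - rhs) < 1e-12      # both equal 4/6 here
```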
Interesting. Number sequences consist of objects that are all of the same type, they are all on the same level of abstraction. What if you asked him to remember not only numbers themselves, but also things that are on a different level of abstraction, e.g. the structure in which they are arranged? For example, if you made him memorize a tree, you could ask him to traverse it in some order.
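The proposed task could be sketched like this (a hypothetical illustration; the tree and traversal choices are my own, not from the original comment): memorize a small tree of numbers, then recite its values in different traversal orders.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    value: int
    children: list["Node"] = field(default_factory=list)

def preorder(node: Node) -> list[int]:
    """Depth-first: visit a node before its children."""
    out = [node.value]
    for child in node.children:
        out.extend(preorder(child))
    return out

def level_order(root: Node) -> list[int]:
    """Breadth-first: visit nodes level by level."""
    out, queue = [], [root]
    while queue:
        node = queue.pop(0)
        out.append(node.value)
        queue.extend(node.children)
    return out

# The memorized structure: 1 has children 2 and 3; 2 has children 4 and 5.
tree = Node(1, [Node(2, [Node(4), Node(5)]), Node(3)])
print(preorder(tree))     # -> [1, 2, 4, 5, 3]
print(level_order(tree))  # -> [1, 2, 3, 4, 5]
```

Reciting the same memorized numbers in two different orders forces recall of the structure, not just the flat sequence.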
Or perhaps you could say a short number sequence that has some non-obvious hidden pattern and ask him to discover that pattern and tell you the next number in the sequence.