Thanks!
I think my point is different, although I have to admit I don't entirely grasp your objection to Nostalgebraist's objection. I think Nostalgebraist's point about rules being gameable does overlap with my example of multi-agent systems, because clear-but-only-approximately-correct rules are exploitable. But I don't think my argument is about it being hard to identify legitimate exceptions. In fact, astrophysicists would have no difficulty identifying when it's the right time to stop using Newtonian gravity.
But my point with the physics analogy is that sometimes, even if you actually know the correct rule, and even if that rule is simple (Navier-Stokes is still just one equation), you still might accomplish a lot more by using approximations and just remembering when they start to break down.
That's because Occam's-razor-simple rules like "to build a successful business, just turn a huge profit!" or "air is perfectly described by this one-line equation!" can be very hard to apply when synthesizing specific new business plans or airplane designs, or even when making predictions about existing business plans or airplane designs.
I guess a better example is: the various flavours of utilitarianism each convert complex moral judgements into simple, universal rules to maximize various measures of utility. But even with a firm belief in utilitarianism, you could still be stumped about the right action in any particular dilemma, just because it might be really hard to calculate the utility of each option. In this case, you don't feel like you've reached an "exception" to utilitarianism at all -- you still believe in the underlying principle -- but you might find it easier to make decisions using an approximation like "try not to kill anybody", until you reach edge-cases where that might break down, like in a war zone.
You might not even know if eating a cookie will increase or decrease your utility, so you stick to an approximation like "I'm on a diet" to simplify your decision-making process until you reach an exception like "this is a really delicious-looking / unusually healthy cookie", in which case you decide it's worth dropping the approximation and reaching for the deeper rules of utilitarianism to make your choice.
In spirit I agree with "the real rules have no exceptions". I believe this applies to physics just as well as it applies to decision-making.
But, while the foundational rules of physics are simple and legible, the physics of many particles -- which are needed for managing real-world situations -- includes emergent behaviours like fluid drag and turbulence. The notoriously complex behaviour of fluids can be usefully compressed into rules that are simple enough to remember and apply, such as inviscid or incompressible flow approximations, or tables of drag coefficients. But these simple rules are built on top of massively complex ones like the Navier-Stokes equation (which is itself still a simplifying assumption over quantum physics and relativity).
It is useful to remember that the equations of incompressible flow are not foundational and so will have exceptions, or else you will overconfidently predict that nobody can fly supersonic airplanes. But that doesn't mean you should discard those simplified rules when you reach an exception and proceed to always use Navier-Stokes, because the real rules might simply be too hard to apply the rest of the time and give the same answer anyway, to three significant figures. It might just be easier in practice to remember the exceptions.
Hence, when making predictive models, even astrophysicists will think of gravity in terms of "stars move according to Newton's inverse square law, except when dealing with black holes or gravitational lensing". They know that it's really relativity under the hood, but only draw on that when they know it's necessary.
OK, that's enough of an analogy. When might this happen in real life?
One case could be multi-agent, anti-inductive systems... like managing a company. As soon as anyone identifies a complete and compact formula for running a successful business it either goes horrifyingly wrong, or the competitive landscape adapts to nullify it, or else it was too vague of a rule to allow synthesizing concrete actions. ("Successful businesses will aim to turn a profit").
You're welcome! And I'm sorry if I went a little overboard. I didn't mean it to sound confrontational.
X and ~X will always receive the same score by both the logarithmic and least-squares scoring rules that I described in my post, although I certainly agree that the logarithm is a better measure. If you dispute that point, please provide a numerical example.
Because of the 1/N factor outside the sum, doubling predictions does not affect your calibration score (as it shouldn't!). This factor is necessary; otherwise your score would only ever get successively worse the more predictions you make, regardless of how good they are. Thus, including X and ~X in the enumeration neither hurts nor helps your calibration score (whether you use the log rule or the least-squares rule).
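To make both points concrete, here is a quick numerical sketch of my own (made-up predictions, not Scott's), checking that a prediction and its complementary phrasing always receive the same score, and that adding the complements to the list leaves the mean score unchanged:

```python
# A quick check of both claims with hypothetical predictions.
import math

def sq_score(p, outcome):
    # squared error between the stated probability and the 0/1 outcome
    return (p - outcome) ** 2

def log_score(p, outcome):
    # negative log of the probability assigned to what actually happened
    return -math.log(p if outcome == 1 else 1 - p)

def mean(xs):
    return sum(xs) / len(xs)

# hypothetical predictions: (probability assigned to X, whether X occurred)
preds = [(0.5, 1), (0.7, 1), (0.9, 0), (0.2, 0)]

# 1) rephrasing "X with probability p" as "~X with probability 1-p" never changes the score
for p, out in preds:
    assert abs(sq_score(p, out) - sq_score(1 - p, 1 - out)) < 1e-12
    assert abs(log_score(p, out) - log_score(1 - p, 1 - out)) < 1e-12

# 2) doubling the list with the complementary phrasings leaves the mean score
#    unchanged, because of the 1/N factor
doubled = preds + [(1 - p, 1 - out) for p, out in preds]
for score in (sq_score, log_score):
    assert abs(mean([score(p, o) for p, o in preds]) -
               mean([score(p, o) for p, o in doubled])) < 1e-12
```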
I agree that eyeballing a calibration graph is no good either. That was precisely the point I made with the lottery ticket example in the main post, where the prediction score is lousy but the graph looks perfect.
I agree that there's no magic in the scoring rule. Doubling predictions is unnecessary for practical purposes; the reason I detail it here is to make a very important point about how calibration works in principle. This point needed to be made, in order to address the severe confusion that was apparent in the Slate Star Codex comment threads, because there was widespread disagreement about what exactly happens at 50%.
I think we both agree that there should be no controversy about this -- however, go ahead and read through the SSC thread to see how many absurd solutions were being proposed! That's what this post is responding to! What is made clear by enumerating both X and ~X in the bookkeeping of predictions -- a move to which there is no possible objection, because it is no different from the original prediction, nor does it affect a proper score in any way -- is that there is no reason to treat 50% as though it has special properties different from 50.01%, and there's certainly no reason to think that there is any significance to the choice between writing "X, with probability P" and "~X, with probability 1-P", even when P=50%.
If you still object to doubling the predictions, you can instead choose to take Scott's predictions and replace all X with ~X, and all P with 1-P. Do you agree that this new set should be just as representative of Scott's calibration as his original prediction set?
By the way, your calibration will be better captured by the fact that if you assigned 50% to the candidate who lost, then you will necessarily have assigned a very low probability to the candidate who won, and that penalty is what tells you your calibration is off.
The problem is the definition of more specific. How do you define specific? The only consistent definition I can think of is that a proposition A is more specific than B if the prior probability of A is smaller than that of B. Do you have a way to consistently tell whether one phrasing of a proposition is more or less specific than another?
By that definition, if you have 10 candidates and no information to distinguish them, then the prior for any candidate to win is 10%. Then you can say "A: Candidate X will win" is more specific than "~A: Candidate X will not win", because P(A) = 10% and P(~A) = 90%.
The proposition "A with probability P" is the exact same claim as the proposition "~A with probability 1-P"; since they are the same proposition, there is no consistent definition of "specific" that will let one phrasing be more specific than the other when P = 50%.
"Candidate X will win the election" is only more specific than "Candidate X will not win the election" if you think that it's more likely that Candidate X will not win.
For example, by your standard, which of these claims feels more specific to you?
A: Trump will win the 2016 Republican nomination
B: One of either Scott Alexander or Eliezer Yudkowsky will win the 2016 Republican nomination
If you agree that "more specific" means "less probable", then B is a more specific claim than A, even though there are twice as many people to choose from in B.
Which of these phrasings is more specific?
C: The winner of the 2016 Republican nomination will be a current member of the Republican party (membership: 30.1 million)
~C: The winner of the 2016 Republican nomination will not be a current member of the Republican party (non-membership: 7.1 billion, or 289 million if you only count Americans).
The phrasing "C" certainly specifies a smaller number of people, but I think most people would agree that ~C is much less probable, since all of the top-polling candidates are party members. Which phrasing is more specific by your standard?
If you have 10 candidates, it might seem more specific to phrase a proposition as "Candidate X will win the election with probability 50%" than "Candidate X will not win the election with probability 50%". That intuition comes from the fact that an uninformed prior assigns them all 10% probability, so a claim that any individual one will win feels more specific in some way. But actually the specificity comes from the fact that if you claim 50% probability for one candidate when the uninformed prior was 10%, you must have access to some information about the candidates that allows you to be so confident. This will be properly captured by the log scoring rule; if you really do have such information, then you'll get a better score by claiming 50% probability for the one most likely to win rather than 10% for each.
Ultimately, the way you get information about your calibration is by seeing how well your full probability distribution about the odds of each candidate performs against reality. One will win, nine will lose, and the larger the probability mass you put on the winner, the better you do. Calibration is about seeing how well your beliefs score against reality; if your score depends on which of two logically equivalent phrasings you choose to express the same beliefs, there is some fundamental inconsistency in your scoring rule.
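As a rough illustration (with made-up numbers, assuming candidate 1 really does win half the time), here is how the expected log score rewards a forecast that uses that extra information over the uninformed 10%-each forecast:

```python
# Expected log score over 10 hypothetical candidates, assuming candidate 1
# actually wins half the time (all numbers made up for illustration).
import math

def expected_log_score(forecast, truth):
    # expected negative log probability assigned to the eventual winner
    return -sum(t * math.log(f) for f, t in zip(forecast, truth))

truth = [0.5] + [0.5 / 9] * 9      # assumed true winning frequencies
uninformed = [0.1] * 10            # maximum-entropy forecast: 10% each
informed = truth                   # forecast that actually uses the information

print(expected_log_score(uninformed, truth))  # ~2.30
print(expected_log_score(informed, truth))    # ~1.79 -- lower (better) score
```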
Yes, but only because I don't agree that there was any useful information that could have been obtained in the first place.
I don't understand why there is so much resistance to the idea that stating "X with probability P(X)" also implies "~X with probability 1-P(X)". The point of assigning probabilities to a prediction is that it represents your state of belief. Both statements uniquely specify the same state of belief, so to treat them differently based on which one you wrote down is irrational. Once you accept that these are the same statement, the conclusion in my post is inevitable, the mirror symmetry of the calibration curve becomes obvious, and given that symmetry, all lines must pass through the point (0.5,0.5).
Imagine the following conversation:
A: "I predict with 50% certainty that Trump will not win the nomination".
B: "So, you think there's a 50% chance that he will?"
A: "No, I didn't say that. I said there's a 50% chance that he won't."
B: "But you sort of did say it. You said the logically equivalent thing."
A: "I said the logically equivalent thing, yes, but I said one and I left the other unsaid."
B: "So if I believe there's only a 10% chance Trump will win, is there any doubt that I believe there's a 90% chance he won't?
A: "Of course, nobody would disagree, if you said there's a 10% chance Trump will win, then you also must believe that there's a 90% chance that he won't. Unless you think there's some probability that he both will and will not win, which is absurd."
B: "So if my state of belief that there's a 10% chance of A necessarily implies I also believe a 90% chance of ~A, then what is the difference between stating one or the other?"
A: "Well, everyone agrees that makes sense for 90% and 10% confidence. It's only for 50% confidence that the rules are different and it matters which one you don't say."
B: "What about for 50.000001% and 49.999999%?"
A: "Of course, naturally, that's just like 90% and 10%."
B: "So what's magic about 50%?"
"Candidate X will win the election with 50% probability" also implies the proposition "Candidate X will not win the election with 50% probability". If you propose one, you are automatically proposing both, and one will inevitably turn out true and the other false.
If you want to represent your full probability distribution over 10 candidates, you can still represent it as binary predictions. It will look something like this:
Candidate 1 will win the election: 50% probability
Candidate 2 will win the election: 10% probability
Candidate 3 will win the election: 10% probability
Candidate 4 will win the election: 10% probability
Candidate 5 will win the election: 10% probability
Candidate 6 will win the election: 2% probability
...
Candidate 1 will not win the election: 50% probability
Candidate 2 will not win the election: 90% probability
...
The method described in my post handles this situation perfectly well. All of your 50% predictions will (necessarily) come true 50% of the time, but you rack up a good calibration score if you do well on the rest of the predictions.
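Here is a tiny sketch of that bookkeeping (the winner index is arbitrary, just for illustration): the 50% bucket always comes out at exactly 50%, and the information about your calibration lives in the remaining predictions.

```python
# Toy example: a 10-candidate forecast written as binary predictions plus complements.
probs = [0.5, 0.1, 0.1, 0.1, 0.1, 0.02, 0.02, 0.02, 0.02, 0.02]  # sums to 1.0
winner = 0   # suppose candidate 1 wins; any other index gives the same 50% bucket result

predictions = []
for i, p in enumerate(probs):
    predictions.append((p, i == winner))        # "candidate i+1 will win the election"
    predictions.append((1 - p, i != winner))    # "candidate i+1 will not win the election"

fifty_percent_bucket = [came_true for p, came_true in predictions if p == 0.5]
print(sum(fifty_percent_bucket) / len(fifty_percent_bucket))   # always 0.5
```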
Agreed.
I'm really not so sure what a frequentist would think. How would they express "Jeb Bush will not be the top-polling Republican candidate" in the form of a repeated random experiment?
It seems to me more likely that a frequentist would object to applying probabilities to such a statement.
I have updated my post to respond to your concerns, expanding on your lottery example in particular. Let me know if I've adequately addressed them.
I intend to update the article later to include log error. Thanks!
The lottery example though is a perfect reason to be careful about how much importance you place on calibration over accuracy.
Thanks, that is indeed a better scoring rule. I did check first to see if the squared error was proper, and it was (since these are binary predictions), but the log rule makes much more sense. I will update the above later when I get home.
Thanks!
Hi Mark,
Thanks for your well-considered post. Your departure will be a loss for the community, and I'm sorry to see you go.
I also feel that some of the criticism you're posting here might be due to a misunderstanding, mainly regarding the validity of thought experiments, and of reasoning by analogy. I think both of these have a valid place in rational thought, and have generally been used appropriately in the material you're referring to. I'll make an attempt below to elaborate.
Reasoning by analogy, or, the outside view
What you call "reasoning by analogy" is well described in the sequence on the outside view. However, as you say,
The fundamental mistake here is that reasoning by analogy is not in itself a sufficient explanation for a natural phenomenon, because it says nothing about the context sensitivity or insensitivity of the original example and under what conditions it may or may not hold true in a different situation.
This is exactly the same criticism that Eliezer has of outside-view thinking, detailed in the sequences!
In outside view as a conversation halter:
Of course Robin Hanson has a different idea of what constitutes the reference class and so makes a rather different prediction - a problem I refer to as "reference class tennis"[...] But mostly I would simply decline to reason by analogy, preferring to drop back into causal reasoning in order to make weak, vague predictions.
You're very right that the uncertainty in the AI field is very high. I hope that work is being done to get a few data points and narrow down the uncertainty, but don't think that you're the first to object to an over-reliance on "reasoning by analogy". It's just that when faced with a new problem with no clear reference class, it's very hard to use the outside view, but unfortunately also hard to trust predictions from a model which has sensitive parameters with high uncertainties.
Thought experiments are a tool of deduction, not evidence
We get instead definitive conclusions drawn from thought experiments only.
This is similar to complaining about people arriving at definitive conclusions drawn from mathematical derivation only.
I want to stress that this is not a problem in most cases, especially not in physics. Physics is a field in which models are very general and held with high confidence, but often hard to use to handle complicated cases. We have a number of "laws" in physics that we have fairly high certainty of; nonetheless, the implications of these laws are not clear, and even if we believe them we may be unsure of whether certain phenomena are permitted by these laws or not. Of course we also do have to test our basic laws, which is why we have CERN and such, especially because we suspect they are incomplete (thanks in part to thought experiments!).
A thought experiment is not data, and you do not use conclusions from thought experiments to update your beliefs as though the thought experiment were producing data. Instead, you use thought experiments to update your knowledge of the predictions of the beliefs you already have. You can't just give an ordinary human the laws of physics written down on a piece of paper and expect them to immediately and fully understand the implications of the truth of those laws, or even to verify that the laws are not contradictory.
Thus, Einstein was able to use thought experiments very profitably to identify that the laws of classical mechanics (as formulated at the time) led to a contradiction with the laws of electrodynamics. No experimental evidence was needed; the thought experiment is a logical inference procedure which takes one consequence of Maxwell's equations -- that light travels at speed 'c' in all reference frames -- and shows it to be incompatible with Galilean relativity. A thought experiment, just like a mathematical proof-by-contradiction, can be used to show that certain beliefs are mutually inconsistent and one must be changed or discarded.
Thus, I take issue with this statement:
(thought experiments favored over real world experiments)
Thought experiments are not experiments at all, and cannot even be compared to experiments. They are a powerful tool for exploring theory, and should be compared to other tools of theory such as mathematics. Experiments are a powerful tool for checking your theory, but experiments alone are just data; they won't tell you what your theory predicted, or whether your theory is supported or refuted by the data. Theory is a powerful tool for exploring the spaces of mutually compatible beliefs, but without data you cannot tell whether a theory has relevance to reality or not.
It would make sense to protest that thought experiments are being used instead of math, which some think is a more powerful tool for logical inference. On the other hand, math fails at being accessible to a wide audience, while thought experiments are. But the important thing is that thought experiments are similar to math in their purpose. They are not at all like experiments; don't get their purposes confused!
Within Less Wrong, I have only ever seen thought experiments used for illustrating the consequences of beliefs, not for being taken as evidence. For example, the belief that "humans have self-sabotaging cognitive flaws, and a wide variation of talents" and the belief that "humans are about as intelligent as intelligent things can get" would appear to be mutually incompatible, but it's not entirely obvious and a valid space to explore with thought experiments.
Upvotes and downvotes should be added independent of the post's present score [pollid:950]
Upvoted because I think this is a really good point, which is almost totally missed in the surrounding discussion.
For example, it's interesting to see that a lot of the experiments were directly attempting to measure C: The researcher tries to persuade the child to believe something about A, and then measures their performance. But then that research gets translated in the lay press as demonstrating something about A!
I feel that if emr's post were put as a header to Scott's, the amount of confusion in the rebuttals would be reduced considerably.
Incidentally, I've observed a similarly common difficulty understanding the distinction between derivative orders of a quantity, eg. distinguishing between something "being large" vs. "growing fast", etc. This seems less common among people trained in calculus, but even then, often people confuse these. I see it all the time in the press, and I wonder if there is a similar level-hopping neural circuit at work.
For example, there are three or four orders of differentiation that exist in common discussion of climate change, eg:
- A: Scientists recommend that atmospheric CO2 be kept below 350 ppm.
- B: Canada emits only about half a gigaton of CO2 per year, whereas China emits nearly twenty times that much.
- BB: Canada emits 15.7 tons of CO2 annually per capita, among the highest in the world, whereas China emits less than half of that amount per capita.
- C: China's emissions are among the fastest-growing in the world, up by nearly 500 million tonnes over last year. Canada decreased its emissions by 10 million tonnes over the same period.
- D: The growth in Canadian oil-industry emissions could slow if low prices force the industry to reduce expansion plans.
Et cetera...
Ostensibly what actually matters is A, which is dependent on the third integral of what is being discussed in D! People end up having a very hard time keeping these levels distinct, and much confusion and miscommunication ensues.
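To make the level-hopping concrete, here is a toy numerical sketch (entirely made-up numbers, not a climate model) of how a change at the D level only reaches the A level after integrating up the whole chain:

```python
# Toy sketch of the four levels above: D = change in the growth of emissions,
# C = growth of emissions, B = annual emissions, A = concentration.
growth_change = -0.01    # D: emissions growth slows slightly every year
growth = 0.5             # C: current annual growth in emissions
emissions = 10.0         # B: current annual emissions
concentration = 400.0    # A: current concentration (arbitrary units)

for year in range(60):
    growth += growth_change           # D feeds into C
    emissions += growth               # C feeds into B
    concentration += 0.1 * emissions  # B feeds into A (toy conversion factor)

# even though D has been negative the whole time, emissions only peak around
# year 50, and the concentration A rises throughout
print(round(emissions, 1), round(concentration, 1))
```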
I wonder -- do you think students of calculus will be better at understanding the levels of indirection in either case?
That sounds beyond terrible. I really wish I could be of more help. I know exactly how awful it is to have a migraine for one hour, but I cannot fathom what it must be like to live with it perpetually.
Well, here is some general Less Wrong-style advice which I can try to offer. The first thing is that since you have been coping with this for so long, maybe you don't have a clear feeling for how much better life would be without this problem. If these migraines are as bad for you as I imagine they are, then I would recommend that you make curing yourself almost your first priority in life, as an instrumental goal for anything else that you care about.
I agree that it is worse than blindness. If I went blind, I would learn to cope and not invest all of my energies into restoring my vision. But if I were you, I would classify curing your migraines as a problem deserving an extraordinary effort as if your life itself were at stake ( http://lesswrong.com/lw/uo/make_an_extraordinary_effort/ ). That means going beyond the easy and obvious solutions that you have already tried (such as medication) and doing something out of the ordinary to succeed.
Treat this as mere speculation, since I'm not up-to-date on the migraine literature anymore... but as an example of out-of-the-ordinary solutions, you could try renting a different house for a month, moving to a different city, or even moving to a totally different country for a couple of weeks. The thinking is that if there is an environmental trigger, a shotgun approach that changes as many environmental variables as possible at once might solve this. For example, if it turned out you have a sensitivity to something in your house, moving house for a while might work. If it turned out to be air pollution in your city, then moving to a cleaner environment might fix it. Unfortunately, unless the state of migraine knowledge has advanced a lot, I think the space of possible hypotheses is huge. So...
Basically, I'm suggesting that you might want to try something on the scale of a month-long trip to live with Buddhist monks in Nepal, or on a Kibbutz in Israel, or to a fishing village in Newfoundland, or something. Changing at once basically everything about your lifestyle from diet, exercise, environment, sleep schedule, electronic devices, and interpersonal interactions. It's not the kind of solution most people would try, especially since the daily responsibilities of life (work, family, money, etc) always seem to take priority, and nobody has the time to just go and leave for a month. Especially since you have a severe impairment which probably makes all the other things take even more time and effort. But that's the difference between making a desperate effort, and "trying to try" just to satisfy yourself that you've done as much as anyone else would do. If curing your migraines is your top priority in life, as I think it should be right now, then it's worth investing a year of your time.
Anyway, that's the only other thought I have. You should try the easy things first of course (starting with MSG), but before you give up make sure you understand how wide the space of possible solutions might be, and how many different lines of attack might exist that haven't even been thought of yet.
I suffered from severe migraines for most of my life, but they were much more frequent when I was younger -- about two or three times a week. During high school they decreased in frequency a lot, to maybe once a month or once every two months, and now that I am in my late 20s, I only get migraines once or twice a year. Unfortunately I can't give you much rationalist advice; although I discussed migraines with my doctor and we worked together to find a solution, to my understanding it's still not a scientifically well-understood problem. So all I can tell you is what I've found worked for me.
My symptoms were usually a severe throbbing pain around my head somewhere between my eyes and ears, light and sound sensitivity, nausea, and a strange kind of dizziness. Reorienting my head would affect the pain a lot, and so during a migraine I usually ended up with my head tilted to one side.
I couldn't find any good way to rid myself of a migraine once it had started, but Advil worked well for dulling the pain, after about 15 minutes. I'd just lie in a dark room until it passed, which sometimes took up to an hour.
Preventative measures worked the best. I did learn some warning signs (well, feelings that are a bit hard to put into words) that a migraine might be impending, and after I noticed them I would try to remove myself to a quiet, low-stimulation environment, and make sure that I was well-hydrated and calm. That would often help avoid the migraine becoming severe. I found that migraines were always much, much worse when I was dehydrated, which as it happens is an easy condition not to notice.
I don't know exactly what has caused the large reduction in frequency of attacks for me, although I'm thankful for it. It could just be that my body grew and changed and now I don't get them as much. It also could be dietary changes -- I used to eat very different foods when I was a kid than I do now, including a lot of instant ramen noodles and other salty foods, and candy, whereas now I keep a much healthier diet. I cut out the ramen noodles at my doctor's suggestion that MSG might be a trigger, and that was definitely correlated with a major decrease in occurrence frequency, but it wasn't a controlled experiment and I changed a lot of other things too. I wouldn't be surprised if the dietary changes helped, but it's hard to be certain.
Good luck. Migraines are bad, bad, bad and I hope you can get rid of them.
*Edit: By the way, have you been to an optometrist lately? It might be that you are suffering from low-level eye strain and are in need of glasses.
The sentence after the Mere Exposure Effect is introduced does not quite parse. Might want to double check it.
A possible distinction between status and dominance: You are everybody's favourite sidekick. You don't dominate or control the group, nor do you want to, nor do you even voice any opinions about what the group should do. You find the idea of telling other people what to do to be unpleasant, and avoid doing so whenever you can. You would much rather be assigned complex tasks and then follow them through with diligence and pride. Everyone wants you in the group, they genuinely value your contribution, they care about your satisfaction with the project, and want you to be happy and well compensated.
By no means would I consider this role dominant, at least not in terms of controlling other people. (You might indeed be the decisive factor in the success of the group, or the least replaceable member). But it is certainly a high-status role; you are not deferred to but you are respected, and you are not treated as a replaceable cog. The president or boss knows your name, knows your family, and calls you first when something needs to be done.
I think many people aspire to this position and prefer it over a position of dominance.
A low-status person on this scale would be somebody ignored, disrespected, or treated as replaceable and irrelevant. You are unworthy of attention. When it is convenient others pretend you don't exist, and your needs, desires, and goals are ignored.
I think almost everyone desires high status by this measure. It is very different than dominance.
Ah, hmm.... Maybe! If you include the entire history of the atom, then I'm not actually sure. That's a tough question, and a good question =)
That is not at all true; for example, see the inverse problem (http://en.wikipedia.org/wiki/Inverse_problem). Although the atom's position is uniquely determined by the rest of the universe, the inverse is not true: Multiple different states of the universe could correspond to the same position of the atom. And as long as the atom's position does not uniquely identify the rest of the outside universe, there is no way to infer the state of the universe from the state of the atom, no matter how much precision you can measure it with. The reason is that there are many ways that the boundary conditions of a box containing an atom could be arranged, in order to force it to any position, meaning that there is a limited amount that the atom can tell you about its box.
The atom is affected by its local conditions (electromagnetic and gravitational fields, etc), but there are innumerable ways of establishing any particular desired fields locally to the atom.
This causes challenges when, for example, you want to infer the electrical brain activity in a patient based on measurements of electromagnetic fields at the surface. Unfortunately, there are multiple ways that electrical currents could have been arranged in three-dimensional space inside the brain to create the same observed measurements at the surface, so it's not always possible to "invert" the measurements directly without some other knowledge. This isn't a problem of measurement precision; a finer grid of electrodes won't solve it (although it may help rule out some possibilities).
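Here is a minimal numerical sketch of that non-uniqueness (a made-up two-sensor, three-source linear forward model, nothing like real brain geometry): two genuinely different source configurations produce identical surface readings.

```python
# Underdetermined linear forward model: more unknown sources than measurements,
# so the model has a null space and the inversion is non-unique.
import numpy as np

M = np.array([[1.0, 2.0, 1.0],     # hypothetical forward model: rows = sensors,
              [0.0, 1.0, 3.0]])    # columns = internal sources

sources_a = np.array([1.0, 1.0, 1.0])
null_vec  = np.array([5.0, -3.0, 1.0])   # M @ null_vec == 0
sources_b = sources_a + null_vec         # a genuinely different source configuration

print(M @ sources_a)   # [4. 4.]
print(M @ sources_b)   # [4. 4.] -- identical readings, so no amount of measurement
                       # precision at the sensors can tell these two apart
```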
Really good ending chapter. The presence of Hermione's character totally changes the tone of the story, and reading this one made it really clear how sorely the Sunshine General was missed in the last ~third or so of the story arc. Eliezer writes her very well, and seems to enjoy writing her too.
I thought Hermione was going to cast an Expecto Patronum at the end, with all the bubbling happiness, but declaring friendship works well too.
Irrelevant thought: Lasers aren't needed to test out the strange optics of Harry's office; positioning mirrors in known positions on the ground and viewing them through a telescope from the tower would already give intriguing results.
I hope that somebody (well, Harry) tells Michael MacNair that his father, alone among those summoned, died in combat with Voldemort. It seems sad for him not to know that.
According to Descartes, for any X, P(X exists | X is taking the survey) = 100%, and 100% certainty of anything on the part of X is only allowed in this particular case.
Therefore, if X says they are Atheist, and that P(God exists | X is taking the survey) = 100%, then X is God, God is taking the survey, and happens to be an Atheist.
A note about calibration...
A poor showing on some questions, such as "what is the best-selling computer game", doesn't necessarily reflect poor calibration. It might instead simply mean that "the best-selling game is Minecraft" is a surprising fact to this particular community.
For example, I may have never heard of Minecraft, but had a large amount of exposure to evidence that "the best-selling game is Starcraft", or "the best-selling game is Tetris". During the time when I did play computer games, which was before the era of Minecraft, this maybe would have been true (or maybe it would have been a misperception even then). But when I look at the evidence that I have mentally available to recall, all of it might point to Starcraft.
Probability calibration can only be based on prior probabilities and available evidence. If I start with something like a max-entropy prior (assigning all computer games an equal probability of having been the best-selling ones) and then update it based on every time I remember hearing the popularity of game X discussed in the media, then the resulting sharpness of my probability distribution (my certainty) will depend on how much evidence I have, and how sharply it favours game X over others.
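As a toy sketch of that update process (hypothetical games, mention counts, and an arbitrary likelihood ratio of my own choosing):

```python
# Start from a uniform prior over which game is the best-seller, then multiply
# in a likelihood each time a game is remembered being discussed as "the big one".
games = ["Starcraft", "Tetris", "Minecraft"]
prior = {g: 1.0 / len(games) for g in games}     # max-entropy starting point

# how often I remember each game being discussed as the best-seller
mentions = {"Starcraft": 6, "Tetris": 3, "Minecraft": 0}

# assume each mention is 3x more likely if that game really is the best-seller
likelihood_ratio = 3.0
posterior = {g: prior[g] * likelihood_ratio ** mentions[g] for g in games}
total = sum(posterior.values())
posterior = {g: round(p / total, 3) for g, p in posterior.items()}

print(posterior)  # ~{'Starcraft': 0.963, 'Tetris': 0.036, 'Minecraft': 0.001}
                  # sharply peaked on Starcraft, even though the updating was sound
```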
If I happened to have obtained my evidence during a time when Starcraft really was the best-selling computer game, my evidence will be sharply peaked around Starcraft, leading me (even as a perfect Bayesian) to conclude that Starcraft is the answer with high probability. Minecraft wouldn't even make runner-up, especially if I haven't heard of it.
When I learn afterward that the answer is Minecraft, that is a "surprising result", because I was confident and wrong. That doesn't necessarily mean I had false confidence or updated poorly, just that I had updated well on misleading evidence. However, we can't have 'evidence' available that says our body of evidence is dubious.
If the whole community is likewise surprised that Minecraft is the right answer, that doesn't necessarily indicate overconfidence in the community... it might instead indicate that our evidence was strongly and systematically biased, perhaps because most of us are not of the Minecraft generation (not sure if 27 is too old for Minecraft or not?).
Similarly, if the world's scholars were all wrong and (somehow), in reality, the Norse God known as the All-Father was not called Odin but rather "Nido", but this fact had been secretly concealed, then learning this fact would surprise everyone (including the authors of this survey). Our calibration results would suddenly look very bad on this question. We would all appear to be grossly overconfident. And yet, there would be no difference in the available evidence we'd had at hand (which all said "Odin" was the right answer).
A tennis ball is a multi-particle system; however, all of the particles accelerate more or less in unison while the ball free-falls. It isn't usually considered to be increasing in temperature, because its entropy isn't increasing much as it falls.
Oh good point! And if you don't know the context when performing the translation (perhaps it's an announcement at an all-girls or an all-boys school?), then the translation will be incorrect.
The ambiguity in the original sentence may be impossible to preserve in the translation process, which doesn't mean that translation is impossible, but it does mean that information must be added by the translator to the sentence that wasn't present in the original sentence.
Sometimes I do small contract translation jobs as a side activity, but it's very frustrating when a client sends me snippets of text to be translated without the full context.
Yes..... you may be right, and it is a compelling reason, for example, not to admit terrorists into a country.
I suppose that if a particular individual's admission into the country would depress the entire country by a sufficient amount, then that's a fair reason to keep them out, without worrying about valuing different peoples' utilities unequally.
That's fine. Do you consider yourself a utilitarian? Many people do not.
For that matter, following Illano's line of thought, it's not clear that the amount that poor people would appreciate receiving all of my possessions is greater than the amount of sadness I would suffer from losing everything I own. (Although if I was giving it away out of a feeling of moral inclination to do so, I would presumably be happy with my choice). I'm not sure what George Price was thinking exactly.
Yes, of course. But the net average quality of life is increased overall. Please examine the posts that I'm replying to here, for the context of the point I am making. For convenience I've copied it below:
How many billion people would be better off if allowed to immigrate to GB? Utilitarianism is about counting everyone's utility the same...
You can't fit billions of people in the UK.
If you are entering the argument with a claim that the UK's current inhabitants can be utilitarian and simultaneously weigh their own utility higher than those of other humans, then you should be directing your argument toward buybuydanddavis' post, since ze's the one who said "That weighting factor should be 1 for all". I am merely noting that not being able to fit billions of people in the UK is not a valid counterargument; net utility will still be increased by such a policy no matter what the UK's population carrying capacity is.
You can't fit billions of people in the UK. (I guess that's not what you meant, but that's what it sounds like.)
The gain in quality of life from moving to the UK would gradually diminish as the island became overcrowded, until there was no net utility gain from people moving there anymore. Unrestricted immigration is not the same thing as inviting all seven billion humans to the UK. People will only keep immigrating until the average quality of life is the same in the UK as it is anywhere else; then there will be an equilibrium.
I admit that I found this really disturbing too.
I think that it is intended as an exercise. Put yourself in the mindset of an average 18th or 19th-century individual, and imagine the 21st century as an idealized future. Things seem pretty wonderful; machines do most of the work, medicines cure disease, air travel lets you get anywhere on the planet in a single day.
But then, what?! Women can vote, and run businesses? And legalized gay marriage?!! How shocking and disturbing.
It's almost a given that the future's values will drift apart from ours, although we can't be sure how and in which direction they will go. So something about this idealized future would be likely to seem abhorrent to us, even if normal and natural to the people of that time.
I think -- and this seems to be the part that people don't understand at first -- EY is not suggesting that rape should be legalized, or painting this as his ideal values of the future. EY is saying something about the way that values change over time; the future is bound to embrace some values we find abhorrent, and the way that he can convey that feeling to us is by picking some abhorrent thing that pretty much everybody would agree is bad, and depicting it being normal and acceptable in a future society.
That's the only way that we can experience the feeling of how someone from the past would feel about modern culture.
Aside from the fact that having godly power doesn't necessarily correlate well with an ease of understanding life-forms...
The universe is less like a carefully painted mural in which humans and other life forms were mapped out in exquisite and intricate detail, and more like a big empty petri dish set in a warm spot billions of years ago. The underlying specifications for creating the universe seem to be pretty simple mathematically (a few laws and some low-entropy initial conditions). The complex part is just the intricate structures that have arisen over time.
Ah, too bad! I'm just about to move out of Tokyo. I would have loved to participate in a LW gathering here otherwise.
There's a ~20% chance that I will be back in Tokyo next year, for a period of a few years. So you can count me as 1/5th of an 'interested' response
Player 2 observes "not A" as a choice. Doesn't player 2 still need to estimate the relative probabilities that B was chosen vs. that C was chosen?
Of course Player 2 doesn't have access to Player 1's source code, but that's not an excuse to set those probabilities in a completely arbitrary manner. Player 2 has to decide the probability of B in a rational way, given the available (albeit scarce) evidence, which is the payoff matrix and the fact that A was not chosen.
It seems reasonable to imagine a space of strategies which would lead player 1 to not choose A, and assign probabilities to which strategy player 1 is using. Player 1 is probably making a shot for 6 points, meaning they are trying to tempt player 2 into choosing Y. Player 2 has to decide the probability that (Player 1 is using a strategy which results in [probability of B > 0]), in order to make that choice.
How does this work for a binary quantity?
If your experiment tells you that [x > 45] with 99% confidence, you may in certain cases be able to confidently transform that to [x > 60] with 95% confidence.
For example, if your experiment tells you that the mass of the Q particle is 1.5034(42) with 99% confidence, maybe you can say instead that it's 1.5034(32) with 95% confidence.
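As a rough sketch (assuming the experiment yields a Gaussian posterior, with made-up numbers), the transformation just reads a different quantile off the same posterior:

```python
# Hypothetical Gaussian posterior for a continuous quantity x: less confidence
# buys a stronger (tighter) lower bound on the same data.
from statistics import NormalDist

posterior = NormalDist(mu=75.0, sigma=12.0)   # made-up mean and spread

lower_99 = posterior.inv_cdf(0.01)   # "x > ..." claim held with 99% confidence
lower_95 = posterior.inv_cdf(0.05)   # "x > ..." claim held with 95% confidence
print(round(lower_99, 1), round(lower_95, 1))   # ~47.1 and ~55.3: trading confidence
                                                # for a stronger lower bound
```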
If your experiment happens to tell you that [particle Q exists] is true with 99% confidence, what kind of transformation can you apply to get 95% confidence instead? Discard some of your evidence? Add noise into your sensor readings?
I think my opinion is the same as yours, but I'm curious about whether anybody else has different answers.
Good! I'm glad to hear an answer like this.
So does that mean that, in your view, a drug that removes consciousness must necessarily be a drug that impairs the ability to process information?
Maybe you're on to something...
Imagine there were drugs that could remove the sensation of consciousness. However, that's all they do. They don't knock you unconscious like an anaesthetic; you still maintain motor functions, memory, sensory, and decision-making capabilities. So you can still drive a car safely, people can still talk to you coherently, and after the drugs wear off you'll remember what things you said and did.
Can anyone explain concretely what the effect and experience of taking such a drug would be?
If so, that might go a long way toward nailing down what the essential part of consciousness is (ie, what people really mean when they claim to be conscious). If not, it might show that consciousness is inseparable from sensory, memory, and/or decision-making functions.
For example, I can imagine an answer like "such a drug is contradictory; if it really took away what I mean by 'consciousness', then by definition I couldn't remember in detail what had happened while it was in effect". Or "If it really took away what I mean by consciousness, then I would act like I were hypnotized; maybe I could talk to people, but it would be in a flat, emotionless, robotic way, and I wouldn't trust myself to drive in that state because I would become careless".
Well, unlike a fundamental theory of physics, we don't have strong reasons to expect that consciousness is indescribable in any more basic terms. I think there's a confusion of levels here... GR is a description of how a 4-dimensional spacetime can function and precisely reproduces our observations of the universe. It doesn't describe how that spacetime was born into existence because that's an answer to a different question than the one Einstein was asking.
In the case of consciousness, there are many things we don't know, such as:
1: Can we rigorously draw a boundary around this concept of "consciousness" in concept-space in a way that captures all the features we think it should have, and still makes logical sense as a compact description?
2: Can we use a compact description like that to distinguish empirically between systems that are and are not "conscious"?
3: Can we use a theory of consciousness to design a mechanism that will have a conscious subjective experience?
It's quite possible that answering 1 will make 2 obvious, and if the answer to 2 is "yes", then it's likely that it will make 3 a matter of engineering. It seems likely that a theory of consciousness will be built on top of the more well-understood knowledge base of computer science, and so it should be describable in basic terms if it's not a completely incoherent concept. And if it is a completely incoherent concept, then we should expect an answer instead from cognitive science to tell us why humans generally seem to feel strongly that consciousness is a coherent concept, even though it actually is not.
Since GR is essentially a description of the behaviour of spacetime, it isn't GR's job to explain why spacetime exists. More generally, it isn't the job of any theory to explain why that theory is true; it is the job of the theory to be true. Nobody expects [theory X] to include a term that describes the probability of the truth of [theory X], so lacking this property does not deduct points.
There may be a deeper theory that will describe the conditions under which spacetime will or will not exist, and give recipes for cooking up spacetimes with various properties. But there isn't necessarily a deeper layer to the onion. At some point, if you keep digging far enough, you'll hit "The Truth Which Describes the Way The Universe Really Is", although it may not be easy to confirm that you've really hit the deepest layer. The only evidence you'll have is that theories that claim to go deeper cease to be falsifiable, and increase in complexity.
If you can find [Theory Y] which explains [Theory X] and generalizes to other results which you can use to confirm it, or which is strictly simpler, now that's a different case. In that case you have the ammunition to say that [Theory X] really is lacking something.
But picking which laws of physics happen to be true is the universe's job, and if the universe uses any logical system of selecting laws of physics, I doubt it will be easy to find out. The only fact we know about the meta-laws governing the laws of universes is that the laws of our universe fit the bill, and it's likely that that is all the evidence we will ever be able to acquire.
I am intrinsically contrarian.
Is this a reason, or a bias?
Climate scientists have never made a public falsifiable prediction.
How would you update your beliefs if you learned that this statement is false?
Oil is cleaner than coal, so if CO2 emissions are restricted, the oil industry will probably benefit.
Would you therefore like to offer me the odds on a prediction that, if we investigated the funding sources of various pro- and anti- AGW campaigns and think tanks, we would find that oil companies are predominantly sponsoring pro-AGW think tanks and stronger emissions legislation?
Good, I just wanted to be clear. In my experience, "alarmist" usually does strongly imply that the predictions of danger are unjustified, and that interpretation (which I presume most readers default to) risks changing the intended meaning of your statement that "climate scientists[...] are often quite alarmist."
Now that I re-read your top-level post knowing what you meant, I think I understand much better what you are saying.
By "alarmist", I meant making dire predictions, not sounding panicked when they do
If you don't mind, I would like to probe your usage of this term...
What distinction do you draw between "alarmist" and "alarming"?
If the hypothetical situation is such that the truth really is properly dire, are accurate reports of this dire truth best classified as "alarmist"?
How should you react when, one night in the laboratory, you make an alarming discovery with fairly high confidence? After having them independently verified, would you consider yourself an "alarmist" for reporting your own findings?
Yes and I fully agree with you. I am just being pedantic about this point:
I can only update my beliefs based on the evidence I do have, not on the evidence I lack.
I agree with this philosophy, but my argument is that the following is evidence we do not have:
Due to Snowden and other leakers, we actually know what NSA's cutting-edge strategies involve[...]
Since I have little confidence that, if the NSA had advanced tech, Snowden would have disclosed it, the absence of this evidence should be treated as only very weak evidence of absence, and therefore I wouldn't update my belief about the NSA's supposed advanced technical knowledge based on Snowden.
I agree that it has a low probability for the other reasons you say, though. (And also that people who think setting other peoples' mousetraps on fire is a legitimate tactic might not simultaneously be passionate about designing the perfect mousetrap.)
Sorry for not being clear about the argument I was making.
I do think you're probably right, and I fully agree about the space lasers and their solid diamond heatsinks being categorically different than a crypto wizard who subsists on oatmeal in the Siberian wilderness on pennies of income. So I am somewhat skeptical of CivilianSvendsen's claim.
But, for the sake of completeness, did Snowden leak the entirety of the NSA's secrets? Or just the secret-court-surveillance-conspiracy ones that he felt were violating the constitutional rights of Americans? As far as I can tell (though I haven't followed the story recently), I think Snowden doesn't see himself as a saboteur or a foreign double-agent; he felt that the NSA was acting contrary to what the will of an (informed) American public would be. I don't think he would be so interested in disclosing the NSA's tech secrets, except maybe as leverage to keep himself safe.
That is to say, there could be a sampling bias here. The leaked information about the NSA might always be about their efforts to corrupt the public's crypto because the leakers strongly felt the public had a right to know that was going on. I don't know that anyone would feel quite so strongly about the NSA keeping proprietary some obscure theorem of number theory, and put their neck on the line to leak it.
I don't know much about the NSA, but FWIW, I used to harbour similar ideas about US military technology -- I didn't believe that it could be significantly ahead of commercially available / consumer-grade technology, because if the technological advances had already been discovered by somebody, then the intensity of the competition and the magnitude of the profit motive would lead it to quickly spread into general adoption. So I had figured that, in those areas where there is an obvious distinction between military and commercial grade technology, it would generally be due to legislation handicapping the commercial version (like with the artificial speed, altitude, and accuracy limitations on GPS).
During my time at MIT I learned that this is not always the case, for a variety of reasons, and significantly revised my prior for future assessments of the likelihood that, for any X, "the US military already has technology that can do X", and the likelihood that for any 'recently discovered' Y, "the US military already was aware of Y" (where the US military is shorthand that includes private contractors and national labs).
(One reason, but not the only one, is I learned that the magnitude of the difference between 'what can be done economically' and 'what can be accomplished if cost is no obstacle' is much vaster than I used to think, and that, say, landing the Curiosity rover on Mars is not in the second category).
So it would no longer be so surprising to me if the NSA does in fact have significant knowledge of cryptography beyond the public domain. Although a lot of the reasons that allow hardware technology to remain military secrets probably don't apply so much to cryptography.
I think that a 'reductive' explanation of quantum mechanics might not be as appealing as it seems to you.
Those fluid mechanics experiments are brilliant, and I'm deeply impressed that anyone came up with them, let alone put them into practice! However, I don't find them especially convincing as a model of subatomic reality. Just like the case with early 20th-century analog computers, with a little ingenuity it's almost always possible to build a (classical) mechanism that will obey the same math as almost any desired system.
Certainly, to the extent that it can replicate all observed features of quantum mechanics, the fluid dynamics model can't be discarded as a hypothesis. But it has a very, very large Occam's Razor penalty to pay. In order to explain the same evidence as current QM, it has to postulate a pseudo-classical physics layer underneath, which is actually substantially more complicated than QM itself, which postulates basically just a couple of equations and some fields.
Remember that classical mechanics, and most especially fluid dynamics, are themselves derived from the laws of QM acting over billions of particles. The fact that those 'emergent' laws can, in turn, emulate QM does imply that QM could, at heart, resemble the behaviour of a fluid mechanic system... but that requires postulating a new set of fundamental fields and particles, which in turn form the basis of QM, and give exactly the same predictions as the current simple model that assumes QM is fundamental. Being classical is neither a point in its favour nor against it, unless you think that there is a causal reason why the reductive layer below QM should resemble the approximate emergent behaviour of many particles acting together within QM.
If we're going to assume that QM is not fundamental, then there is actually an infinite spectrum of reductive systems that could make up the lower layer. The fluid mechanics model is one that you are highlighting here, but there is no reason to privilege it over any other hypothesis (such as a computer simulation) since they all provide the same predictions (the same ones that quantum mechanics does). The only difference between each hypothesis is the Occam penalty they pay as an explanation.
I agree that, as a general best practice, we should assign a small probability to the hypothesis that QM is not fundamental, and that probability can be divided up among all the possible theories we could invent that would predict the same behaviour. However, to be practical and efficient with my brain matter, I will choose to believe the one theory that has vastly more probability mass, and I don't think that should be put down as bullet swallowing.
Is QM not simple enough for you, that it needs to be reduced further? If so, the reduction had better be much simpler than QM itself.