Posts

Does davidad's uploading moonshot work? 2023-11-03T02:21:51.720Z

Comments

Comment by Anders_Sandberg on How Many LHC Failures Is Too Many? · 2008-09-22T17:19:00.000Z · LW · GW

I did a calculation here:
http://tinyurl.com/3rgjrl
and concluded that I would start to believe there was something to the universe-destroying scenario after about 30 clear, uncorrelated mishaps (even when taking a certain probability of foul play into account).
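
In sketch form, the style of update involved (all numbers here are invented for illustration; they are not the ones in the linked calculation):

```python
# Each clear mishap multiplies the odds of the anomalous hypothesis
# by a likelihood ratio. All numbers are invented for illustration.

prior = 1e-15             # assumed prior for "something is stopping the LHC"
p_mishap_mundane = 0.3    # assumed chance of a clear mishap per run, mundane world
p_mishap_anomalous = 1.0  # the anomaly always produces a mishap

posterior = prior
for n in range(1, 101):
    odds = posterior / (1 - posterior)
    odds *= p_mishap_anomalous / p_mishap_mundane   # Bayes factor per mishap
    posterior = odds / (1 + odds)
    if posterior > 0.5:
        print(f"posterior exceeds 50% after {n} clear mishaps")
        break
```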

Comment by Anders_Sandberg on Horrible LHC Inconsistency · 2008-09-22T16:05:18.000Z · LW · GW

I like Roko's suggestion that we should look at how many doomsayers actually predicted a danger (and how early). We should also look at how many dangers occurred with no prediction at all (the Cameroon lake eruptions come to mind).

Overall, the human error rate is pretty high: http://panko.shidler.hawaii.edu/HumanErr/ Getting the error rate under 0.5% per statement/action seems very unlikely, unless one deliberately puts it into a system that forces several iterations of checking and correction (Panko's data suggests that error checking typically finds about 80% of the errors). For scientific papers/arguments, one bad statement per thousand is probably a conservative estimate. (My friend Mikael claimed the number of erroneous maths papers is far below this level because of the peculiarities of the field, but I wonder how many orders of magnitude that buys them.)
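
The checking arithmetic, back-of-envelope (the 5% starting rate is an assumption; the 80% detection figure is Panko's):

```python
# Each independent checking pass catches ~80% of the remaining errors,
# so the residual error rate shrinks by a factor of five per pass.

rate = 0.05  # assumed initial error rate per statement/action
for passes in range(4):
    print(f"{passes} checking passes: error rate {rate:.3%}")
    rate *= 1 - 0.80
```

Two iterations of checking already bring an assumed 5% base rate down to 0.2%, under the 0.5% threshold mentioned above.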

At least to me this seems to suggest that in the absence of any other evidence, assigning a prior probability much less than 1/1000 to any event we regard as extremely unlikely is overconfident. Of course, as soon as we have a bit of evidence (cosmic rays, knowledge of physics) we can start using smaller priors. But uninformative priors are always going to be odd and silly.

Comment by Anders_Sandberg on LA-602 vs. RHIC Review · 2008-06-24T18:55:00.000Z · LW · GW

A new report (Steven B. Giddings and Michelangelo M. Mangano, Astrophysical implications of hypothetical stable TeV-scale black holes, arXiv:0806.3381) does a much better job of dealing with the black hole risk than the old "report" Eliezer rightly slammed. It doesn't rely on Hawking radiation (though it has a pretty nice section showing why that is very likely) but instead calculates how well black holes can be captured by planets, white dwarfs and neutron stars (based on AFAIK well-understood physics, apart from the multidimensional gravity one has to assume in order to get the threat in the first place). The derivation does not assume that Eddington luminosity slows accretion, and does a good job of examining how fast black holes can be slowed - it turns out that white dwarfs and neutron stars are good at slowing them. This is used to show that dangerously fast planetary accretion rates are incompatible with the observed lifetimes of white dwarfs and neutron stars.

The best argument for Hawking radiation IMHO is that particle physics is time-reversible, so if there exist particle collisions producing black holes there ought to exist black holes decaying into particles.

Comment by Anders_Sandberg on Congratulations to Paris Hilton · 2007-10-19T01:13:16.000Z · LW · GW

If this is not a hoax and she does not pull a Leary, we will have her around for a long time. Maybe one day she will even grow up. But seriously, I think Eli is right. In a way, given that I consider cryonics likely to be worthwhile, she has demonstrated that she might be more mature than I am.

To get back to the topic of this blog, cryonics and cognitive biases is a fine subject. There are a lot of biases to go around here, on all sides.

Comment by Anders_Sandberg on "Can't Say No" Spending · 2007-10-18T09:03:44.000Z · LW · GW

"If intelligence is an ability to act in the world, if it refer to some external reality, and if this reality is almost infinitely malleable, then intelligence cannot be purely innate or genetic."

This misses the No Free Lunch theorems, which state that no learning system outperforms any other in general. Yes, full human intelligence, AI superintelligence, earthworms and selecting actions at random are all just as good. The trick is "in general", since that covers an infinity of patternless possible worlds. Worlds with (to us) learnable and understandable patterns are a minuscule minority.
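
This can be checked by brute force on a toy search space (a sketch of the flavor of the theorem, not its general statement):

```python
from itertools import product

# Averaged over ALL functions f: {0,1,2,3} -> {0,1,2}, two different
# non-revisiting search orders find exactly the same mean best value.

X, Y = range(4), range(3)

def run(strategy, f, budget=2):
    best, visited = 0, []
    for _ in range(budget):
        x = strategy(visited)             # next unvisited point to evaluate
        visited.append(x)
        best = max(best, f[x])
    return best

def left_to_right(visited):
    return len(visited)                   # evaluates 0, 1, 2, ...

def right_to_left(visited):
    return len(X) - 1 - len(visited)      # evaluates 3, 2, 1, ...

for strategy in (left_to_right, right_to_left):
    total = sum(run(strategy, f) for f in product(Y, repeat=len(X)))
    print(strategy.__name__, total / len(Y) ** len(X))
```

Both orders print the same average, because over all possible functions no evaluation order has an edge.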

Clearly intelligence needs input from an external world. But it has been shaped by millions of years of evolution within a particular kind of world, and there is quite a bit of information in our genes about how to make a brain that can process this kind of world. Beings that are born with perfectly general brains will not learn how to deal with the world until it is too late, compared to beings with more specialised brains. This is actually a source of our biases, since the built-in biases that reduce learning time may not be perfectly aligned with the real world, or with the new world we currently inhabit.

Conversely, it should not be strange that there is variation in the genes that enable our brains to form and that this produces different biases, different levels of adaptivity and different "styles" of brains. Just think of trying to set the optimal learning rate, discount rate and exploration rate of reinforcement agents.
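
For concreteness, here are those three knobs in a generic tabular Q-learning sketch (textbook form, not any particular agent):

```python
import random

ALPHA = 0.1    # learning rate: how fast new evidence overwrites old estimates
GAMMA = 0.95   # discount rate: how much future reward matters now
EPSILON = 0.1  # exploration rate: how often to try a non-greedy action

def choose_action(Q, state, actions):
    """Epsilon-greedy: mostly exploit, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def update(Q, state, action, reward, next_state, actions):
    """One temporal-difference backup of the action-value estimate."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

Set ALPHA too high and the estimates thrash with every noisy experience; too low and learning takes forever. Evolution faces the same tradeoffs when tuning brains.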

I agree with Watson that it would be very surprising if intelligence-related genes were perfectly equally distributed. At the same time, there are a lot of traits that are surprisingly equally distributed. And the interplay between genetics, environment, schooling, nutrition, rich and complex societies etc. is complex and accounts for a lot. We honestly do not understand it and its limits at present.

Comment by Anders_Sandberg on How to Seem (and Be) Deep · 2007-10-16T23:31:00.000Z · LW · GW

People have apparently argued for a 300 to 30,000 year storage limit due to free radicals generated by cosmic rays, but the uncertainty is pretty big. Cosmic rays and background radiation are likely not as much of a problem as carbon-14 and potassium-40 atoms anyway, not to mention the freezing damage. http://www.cryonics.org/1chapter2.html has a bit of discussion of this. The quick way of estimating the damage is to time-compress it: treat the dose accumulated over the whole storage period as if it were delivered as a single acute dose.
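
As arithmetic this is just a multiplication (the numbers here are assumptions for illustration, not figures from the linked chapter):

```python
# Time-compression estimate: treat the whole storage period's accumulated
# dose as one acute dose. Both numbers below are assumptions.

yearly_dose_sv = 0.0024   # ~2.4 mSv/year assumed background dose
storage_years = 300
acute_equivalent_sv = yearly_dose_sv * storage_years
print(f"{storage_years} years of storage ~ {acute_equivalent_sv:.2f} Sv as an acute dose")
```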

Comment by Anders_Sandberg on The Logical Fallacy of Generalization from Fictional Evidence · 2007-10-16T11:38:44.000Z · LW · GW

I think Kaj has a good point. In a current paper I'm discussing the Fermi paradox and the possibility of self-replicating interstellar killing machines. Should I mention Saberhagen's berserkers? In this case my choice was pretty easy, since beyond the basic concept his novels don't contain that much of actual relevance to my paper, so I just credit him with the concept and move on.

The example of Metamorphosis of Prime Intellect seems deeper, since it would be an example of something that can be described entirely theoretically but becomes more vivid and clearly understandable in the light of a fictional example. But I suspect the problem here is the vividness: it would produce a bias towards increased risk estimates for that particular problem as a side effect of making the problem itself clearer. Sometimes that might be worth it, especially if the analysis is strong enough to rein in wild risk estimates, but quite often it might be counterproductive.

There is also a variant of the absurdity bias in referring to sf: many people tend to regard the whole argument as sf if there is an sf reference in it. I noticed that some listeners to my talk on berserkers indeed did not take the issue of whether there are civilization-killers out there very seriously, while they might be concerned about other "normal" existential risks (and of course, many existential risks are regarded as sf in the first place).

Maybe a rule of thumb is to limit fiction references to cases where 1) they say something directly relevant, 2) there is a valid reason for crediting them, and 3) the biasing effects do not too badly reduce the ability to think rationally about the argument.

Comment by Anders_Sandberg on The Logical Fallacy of Generalization from Fictional Evidence · 2007-10-16T09:41:16.000Z · LW · GW

Another reason people overvalue science fiction is the availability bias due to the authors who got things right. Jules Verne had a fairly accurate travel time from the Earth to the Moon, Clarke predicted/invented geostationary satellites, John Brunner predicted computer worms. But of course this leaves out all the space pirates using slide rules for astrogation (while their robots serve rum), the rays from unknown parts of the electromagnetic spectrum and the gravity-shielding cavorite. There is a vast number of quite erroneous predictions.

I have collected a list of sf stories involving cognition enhancement. They are all over the place in terms of plausibility, and I was honestly surprised by how few useful ideas about the impact of enhancement they contained. Maybe it is easier to figure out the impact of spaceflight. I think the list might be useful as a list of things we might want to invent, and of common tropes surrounding enhancement, rather than as a starting point for analysing what might actually happen.

Still, sf might be useful in the same sense that ordinary novels are: creating scenarios and showing more or less possible actions or ways of relating to events. There are a few studies showing that reading ordinary novels improves empathy, and perhaps sf might improve "future empathy", our ability to consider situations far away from our here-and-now situation.

Comment by Anders_Sandberg on How to Seem (and Be) Deep · 2007-10-15T11:09:52.000Z · LW · GW

I think the "death gives meaning to life" meme is a great example of "standard wisdom". It is apparently paradoxical (the right form to be "deep"), and it provides a comfortable consolation for a nasty situation. But I have seldom seen any deep defense of it in the bioethical literature. Even people who strongly support it, and who ought to work very hard to demonstrate to fellow philosophers that it is a true statement, seem content to just rattle it off as self-evident (or to declare that people not feeling it in their guts are simply superficial).

Being a hopeless empiricist, I would like to check whether people today feel life is less meaningful than people did a century ago, and whether people in countries with short life expectancies feel more meaning than people in countries with long ones. I'm pretty certain the latter is not true, and the former looks iffy (hard to check, and with lots of confounders like changed social and cultural values). I did some statistics on the current state, http://www.aleph.se/andart/archives/2006/12/a_long_and_happy_life.html and found no link between longer life and ennui, at least on a national level.

Comment by Anders_Sandberg on How to Seem (and Be) Deep · 2007-10-14T20:18:27.000Z · LW · GW

I have played with the idea of writing a "wisdom generator" program for a long time. A lot of "wise" statements seem to follow a small set of formulaic rules, and it would not be too hard to make a program that randomly generated wise sayings. A typical rule is to create a paradox ("Seek freedom and become captive of your desires. Seek discipline and find your liberty") or just use a nice chiasmus or reversal ("The heart of a fool is in his mouth, but the mouth of the wise man is in his heart"). This seems to fit in with your theory: the structure given by the form is enough to trigger recognition that a wise saying will now arrive. If the conclusion is weird or unfamiliar, so much the better.
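
A minimal sketch of such a generator (the templates mirror the examples above; the vocabulary is made up):

```python
import random

NOUNS = ["freedom", "discipline", "silence", "desire", "wisdom", "fear"]

TEMPLATES = [  # paradox and chiasmus, the two formulaic rules above
    "Seek {a} and become captive of your {b}.",
    "The {a} of a fool is in his {b}, but the {b} of the wise man is in his {a}.",
    "Only through {a} can one be free of {a}.",
    "{a} is the beginning of {b}; {b} is the end of {a}.",
]

def wise_saying():
    a, b = random.sample(NOUNS, 2)
    return random.choice(TEMPLATES).format(a=a, b=b)

for _ in range(3):
    print(wise_saying())
```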

Currently reading Raymond Smullyan's The Tao is Silent, and I'm struck by how much less wise Taoism seems when it is clearly explained.

Comment by Anders_Sandberg on Original Seeing · 2007-10-14T10:15:18.000Z · LW · GW

There is much to be said for looking at the super-specific. All the interesting complexity is found in the specific cases, while the whole often has less complexity (e.g. the algorithmic complexity of the list of all integers up to some bound is much smaller than the algorithmic complexity of most individual large integers). While we might be trying to find good compressed descriptions of the whole, if we do not see how specific cases can be compressed and how they relate to each other we do not have much of a starting point, given that the whole usually overwhelms our limited working memories.
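
A crude way to see the asymmetry is to use a general-purpose compressor as a rough stand-in for algorithmic complexity:

```python
import random
import zlib

# A highly regular list of integers compresses far better than a random
# digit string of the same printed length (zlib is only a crude proxy
# for algorithmic complexity, but the gap is the point).

regular = ",".join(str(i) for i in range(1, 2001))
random_digits = "".join(random.choice("0123456789") for _ in range(len(regular)))

print("list of integers:", len(zlib.compress(regular.encode())), "bytes")
print("random digits:   ", len(zlib.compress(random_digits.encode())), "bytes")
```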

Staring at walls is underrated. But I tend to get distracted from my main project by all the interesting details in the walls.

Comment by Anders_Sandberg on Priming and Contamination · 2007-10-10T13:59:24.000Z · LW · GW

It appears that priming can be reduced by placing words into a context: priming for words previously seen in a text (or even a nonsense jumble) is weaker than for words seen individually.

Comment by Anders_Sandberg on Recommended Rationalist Reading · 2007-10-02T00:42:46.000Z · LW · GW

I constantly buy textbooks and use them as bedtime reading. A wonderful way to pick up the fundamentals of (or at least a superficial familiarity with) many subjects. However, just reading a textbook is unlikely to give great insight into any field. Doing the exercises, and in particular having a teacher or mentor point out what is important, is necessary for actually getting anywhere.

To add at least some thread-relevant material, I'd like to recommend Eliezer's web page "An Intuitive Explanation of Bayesian Reasoning" at http://yudkowsky.net/bayes/bayes.html

I'm reading Piattelli-Palmarini's "Inevitable Illusions" right now, but I'm not that impressed so far. Most of the contents seem to be familiar from this list.

Comment by Anders_Sandberg on 9/26 is Petrov Day · 2007-09-29T20:54:15.000Z · LW · GW

In my opinion a full-scale thermonuclear war would likely have wiped out neither humanity (I'm reading the original nuclear winter papers as well as their criticisms right now) nor civilization. It would have been terribly bad for both, though. I did a small fictional writeup of such a scenario for a roleplaying game, http://www.nada.kth.se/~asa/Game/Fukuyama/bigd.html based in turn on the information in "The Effects of Nuclear War" (OTA 1979). That scenario may have been too optimistic, but it is hard to tell. It seems that much would depend on exact timing and the level of forewarning. But even in the most optimistic scenario the repercussions on human progress would have been severe, since human capital is disproportionately concentrated in cities that are likely to be devastated. This can in turn make other threats to human flourishing more serious. For example, in my scenario AIDS becomes a far more devastating epidemic than in our world, since the rate of research into it is much reduced and the seriousness of the epidemic is overshadowed by war-related conditions.

Comment by Anders_Sandberg on Einstein's Arrogance · 2007-09-25T10:44:15.000Z · LW · GW

I agree with Tom that there isn't that much room to change the field equations once you have decided on the Riemannian tensor framework: gravity cannot be expressed as first-order differential equations and still fit observation, while the number of objects available for building a set of second-order equations is very limited. The equations are the simplest possibility (with the cosmological constant as a slight uglification, but it is just a constant of integration).
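
For reference, the equations in question, with the cosmological constant term included:

$$R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}$$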

But selecting the tensor framework, that is of course where all the bits had to go. It is not an obvious choice at all.

It is interesting to note that Einstein's last paper, "On the relativistic theory of the non-symmetric field" includes a discussion of the "strength" of different theories in terms of how many undetermined degrees of freedom they have. http://books.google.com/books?id=tB9Roi3YnAgC&pg=PA131&lpg=PA131&dq=%22relativistic+theory+of+the+non+symmetric+field%22&source=web&ots=EkMv5tudsI&sig=lkTQE94Ay1h2-qS0mcbGT3xa22M If I recall right, he finds his own theory to be rather flabby.

Comment by Anders_Sandberg on How Much Evidence Does It Take? · 2007-09-25T10:23:39.000Z · LW · GW

Yes, publication bias matters. But it also applies to the p<0.001 experiment - if we have just a single publication, should we believe that the effect is true and just one group has done the experiment, or that the effect is false and publication bias has prevented the publication of the negative results? If we had a few experiments (even with different results) it would be easier to estimate this than in the single-publication case.

Comment by Anders_Sandberg on How Much Evidence Does It Take? · 2007-09-24T11:40:02.000Z · LW · GW

This also shows why independently replicated scientific experiments (more independent boxes) are more important than single experiments with very low p-values (boxes with better likelihood ratios).
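
A toy numeric sketch of the point (all numbers invented): independent likelihood ratios multiply, so several modest replications can outweigh one spectacular result.

```python
def posterior(prior, likelihood_ratios):
    """Bayesian update: independent likelihood ratios multiply the odds."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.01
print("one strong experiment (LR 100):   ", posterior(prior, [100]))
print("three modest replications (LR 10):", posterior(prior, [10, 10, 10]))
```

The three replications give a combined ratio of 1000 against the single experiment's 100 - and the multiplication is only valid because the boxes are independent, which is exactly what replication buys.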

Comment by Anders_Sandberg on Human Evil and Muddled Thinking · 2007-09-14T20:48:23.000Z · LW · GW

While Eliezer and I may be approaching the topic differently, I think we have very much the same aim. My approach will however never produce anything worthy to go into anybody's quote file.

Comment by Anders_Sandberg on Human Evil and Muddled Thinking · 2007-09-14T19:41:26.000Z · LW · GW

David Brin has a nice analysis in his book The Transparent Society of what makes open societies work so well (no doubt distilled from others). Essentially it is the freedom to criticize and hold accountable that keeps powerful institutions honest and effective. While most people do not care or dare enough, there are enough "antibodies" in a healthy open society to maintain it, even when the "antibodies" themselves may not always be entirely sane (there is a kind of social "peer review" going on among the criticisms).

Muddled thinking affects this process in several ways. It weakens the ability to perform and react to criticism, and may contribute to reducing the signal-to-noise ratio among whistleblowers by reducing the social "peer review". This is how muddled thinking can promote the loss of openness, democracy and accountability, in the long run leading to non-accountable leaders that have little valid feedback or can just ignore it.

But are biases the main source of muddled thinking? I think muddle is the sum of many different factors: biases, lack of knowledge, communication problems etc. In any situation one or a few factors are the most serious causes of muddle, but they may differ between issues - the biases we have discussed relating to new technology are different from the biases in conspiracy theories or everyday political behavior. To reduce muddle in a situation we ought to reduce the main muddling component(s), but that may be very different in different situations. Sometimes biases are the main problem, sometimes it might just be a lack of communication ability. It might be more cost-effective to give people in a developing country camera cellphones than to teach them about availability biases - while in another country the reverse may be true. But clearly overcoming biases is a relevant component in attacking many forms of societally dangerous muddle.

Comment by Anders_Sandberg on Applause Lights · 2007-09-11T22:44:50.000Z · LW · GW

David's comment that we shouldn't ignore people with little political power is a bit problematic. People who are not ignored in a political process by definition have some political power; whoever is ignored lacks power. So the meaning becomes "people who are ignored are ignored all the time". The only way to handle it is to never ignore anybody on anything. So please tell me your views on whether Solna municipality in Sweden should spend more money on the stairs above the station, or on a traffic light - otherwise the decision will not be fully democratic.

I wonder if the sensitivity to applause lights differs between cultures. When I lectured in Madrid I found that my and several friends' speeches fell relatively flat, despite being our normally successful "standard speeches". But a few others got roaring responses at the applause lights - we were simply not turning ours on brightly enough. The reward of roaring applause is of course enough to bias a speaker towards pouring on more applause lights.

Hmm, was my use of "bias" above just an applause light for Overcoming Bias?

Comment by Anders_Sandberg on Stranger Than History · 2007-09-02T16:38:28.000Z · LW · GW

I get unsolicited email offering to genetically modify rats to my specifications.

I guess this is evidence that we live in an sf novel. Thanks to spam, the world's most powerful supercomputer cluster is now run by criminals: http://blog.washingtonpost.com/securityfix/2007/08/storm_worm_dwarfs_worlds_top_s_1.html Maybe it is by Vernor Vinge. Although the spam about buying Canadian steel in bulk (with extra alloys thrown in if I buy more than 150 tons) might on the other hand indicate that it is an Ayn Rand novel.

This whole issue seems to be linked to the question of how predictable the future is. Given that we get blindsided by fairly big trends, the problem might not be lack of information nor the chaotic nature of the world, but just that we are bad at ignoring historical clutter. Spam is an obvious and logical result of an email system based on free email and a certain fraction of potential customers for whatever you sell. It ought to have been predictable in the early 90s, when the non-academic Net was spreading. But at the time even making predictions about the economics of email would have seemed an unrewarding activity, so it was ignored in favor of newsgroup management.

Maybe the strangeness of the future is just a side effect of limited attention rather than limited intelligence or prediction ability. The strangeness of the past is similarly caused by limited attention to historical facts (i.e. rational ignorance: who cares to understand the Victorian moral system?), making actual historical events look odd to us (Archduke Franz Ferdinand insisted on being sewn into his clothes for a crease-free effect, which contributed to his death and to the triggering of WWI).

Comment by Anders_Sandberg on Science as Attire · 2007-08-23T21:38:20.000Z · LW · GW

I have noticed that since using the word "progress" has become unseemly, many use "evolution" in its stead. Quite often in the sense of "incremental change", sometimes in the slightly biology-analogous sense of "the effect of broad trial-and-error learning" - but hiding the teleological assumption that "progress" was at least open about.

It has been scientifically proven that people use the attire of science to make their views sound more plausible :-) Throw in some neuroscience, statistics or a claim by a Ph.D. in anything, and you show that you are credible. And the worst thing is that it seems to work fairly often. At the price, of course, that increasingly manipulation-savvy media consumers start to suspect a Sinister Conspiracy behind every scientific claim. "Gravitational slingshots - who benefits?" "Who is really behind the stem cells?"

But this attire-wearing is likely nothing new. Tartuffe wore the attire of a pious person to manipulate. It might be more problematic from our standpoint that it is currently a largely epistemological profession/activity that is being used as high-status attire to hide bias and bad epistemology. Having people dress up in moral attire might have been bad for morality and for people in the moral business, but it didn't hurt truth-seeking and bias-overcoming directly.

Comment by Anders_Sandberg on Open Thread · 2007-07-01T22:08:08.000Z · LW · GW

That would be a utilitarian legal system, trying to maximize utility/happiness or minimize pain/harm. I'm not an expert on this field, but there is of course a big literature of comments on and criticisms of utilitarianism. Whether that is evidence enough that it is a bad idea is harder to say. Clearly it would not be feasible to implement something like this in western democratic countries today, both because of the emphasis on human rights and because (and this is probably the stronger reason) many people have the moral intuition that it is wrong to act like this.

That of course leads into the whole issue of the validity of moral intuitions, which some of my Oxford colleagues are far better placed to explain (mostly because they are real ethicists, unlike me). Basically, consequentialists like Peter Singer suspect that moral intuitions are irrational; moderates like Steve Clarke argue that they can contain relevant information (though we had better rationally untangle what it is); and many conservatives regard them as a primary source of moral sentiment we really shouldn't mess with. I guess this is where we get back to overcoming bias again: many moral intuitions look a lot like biases, but whether they are bad biases we ought to get rid of is tricky to tell.

My personal view is that this kind of drafting is indeed wrong, and one can argue it both from a Kantian perspective (not using other people as tools), a rights perspective (the right to life and liberty trumps the social good; the situation is not comparable to the possibly valid restrictions during epidemics, since my continued good health does not hurt the rights of anybody else) and a risk perspective (the government might with equal justification extend such drafting to other areas, like moving people to undesirable regions or to more dangerous experiments, and there might be a serious risk of public-choice bureaucracy and corruption). The ease with which I came up with a long list of counterarguments of course shows my individualist biases. But it is probably easier to get people to join this kind of trial voluntarily simply by telling them it is heroic, for science/society/progress/the children - or by just paying them well.

I'm personally interested in how we could do the opposite: spontaneous, voluntary and informal epidemiology that uses modern information technology to gather data on a variety of things (habits, eating, drugs taken) and then compiles it into databases that enable datamining. A kind of wiki-epidemiology or flickr-epidemiology, so to say. Such data would be far messier and harder to interpret than nice clean studies run by proper scientists, but with good enough automatic data acquisition and enough people, valuable information ought to be gathered anyway. However, we need to figure out how to handle the many biases that will creep into this kind of experiment. Another job for Overcoming Bias!

Comment by Anders_Sandberg on The Scales of Justice, the Notebook of Rationality · 2007-03-13T16:50:00.000Z · LW · GW

This two-sided bias appears to fit in nicely with the neuroscience of decision-making, where anticipatory affect appears to be weighed together to decide whether an action or option is "good enough" to act on. For example, in http://sds.hss.cmu.edu/media/pdfs/Loewenstein/knutsonetal_NeuralPredictors.pdf there seems to be an integration of positive reward in the nucleus accumbens linked to the value of the product and negative affect related to the price in the insula, while the medial prefrontal cortex apparently tracks the difference between them.

There is definitely room for a more complex decision system based on this kind of anticipatory emotional integration, since there might be more emotions than just good/bad - maybe some aspects of a choice could trigger curiosity (resulting in further information gathering), aggression (perhaps when the potential loss becomes very high and personal) or qualitative tradeoffs between different emotions. And the prefrontal cortex could jump between considering different options and check whether any gains enough support to be acted upon, returning to the cycle if none gets quite the clear-cut support it ought to.
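
As a cartoon (an illustration of the idea only, with invented numbers; not a model of real neural circuitry):

```python
import random

THRESHOLD = 1.0

def affect_sample(option):
    """Noisy anticipatory affect: reward-like signal minus cost-like signal."""
    reward = random.gauss(option["value"], 0.3)  # accumbens-like positive affect
    cost = random.gauss(option["price"], 0.3)    # insula-like negative affect
    return reward - cost                         # mPFC-like difference

def deliberate(options, max_cycles=1000):
    support = {o["name"]: 0.0 for o in options}
    for _ in range(max_cycles):
        o = random.choice(options)                    # attend to one option
        support[o["name"]] += 0.1 * affect_sample(o)  # accumulate support
        if support[o["name"]] > THRESHOLD:
            return o["name"]                          # clear-cut enough: act
    return None                                       # no winner: gather more information

options = [{"name": "buy", "value": 1.0, "price": 0.6},
           {"name": "skip", "value": 0.4, "price": 0.1}]
print(deliberate(options))
```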

This makes a lot of sense from a neuroscience perspective, but as an approximation to rationality it is of course a total kludge.