Comment by Arenamontanus on Multitudinous outside views · 2020-08-28T07:14:25.952Z · LW · GW

It seems to me that the real issue is rational weighing of reference classes when using multiple models. I want to assign them weights so that they form a good ensemble to build my forecasting distribution from, and these weights should ideally reflect my prior of them being relevant and good, model complexity, and perhaps that their biases are countered by other reference classes. In the computationally best of all possible worlds I would go down the branching rabbit hole and also make probabilistic estimates of the weights. I could also wing it.

The problem is that the set of potential reference classes appears to be badly defined. The Tesla case potentially involves all possible subsets of stocks (2^N) over all possible time intervals (2^NT), but as the dictator case shows there is also potentially an unbounded set of other facts that might be included in selecting the reference classes. That means that we should be suspicious about having well-formed priors over the reference class set.

When sensible reference classes pop up in my mind and I select from them, I am doing naturalistic decision making where past experience gates availability. So while I should weigh their results together, I should be aware that they are biased in this way, and broaden my model uncertainty for the weighing accordingly. But how much I broaden it depends on how large I allow the considered set of potential reference classes to be, a separate meta-prior.
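As a toy illustration of the weighing step (hypothetical Python; the reference classes, weights, and tempering scheme are all made up for the sketch, not taken from the comment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative reference-class forecasts for some quantity (say, annual
# return), each represented by a sampler. Classes and numbers are hypothetical.
reference_classes = [
    lambda n: rng.normal(0.07, 0.15, n),   # "broad market" class
    lambda n: rng.normal(0.10, 0.30, n),   # "sector peers" class
    lambda n: rng.normal(0.25, 0.40, n),   # "recent trend" class
]

# Prior weights for judged relevance/quality; temper toward uniform
# (alpha < 1) to broaden model uncertainty about the weighing itself.
prior_w = np.array([0.5, 0.3, 0.2])
alpha = 0.7
w = prior_w**alpha / np.sum(prior_w**alpha)

# Ensemble forecast = mixture: allocate samples to classes by weight.
n = 100_000
counts = rng.multinomial(n, w)
samples = np.concatenate([f(c) for f, c in zip(reference_classes, counts)])
print(round(float(samples.mean()), 3), round(float(samples.std()), 3))
```

The tempering exponent is one crude way of encoding the meta-prior: alpha = 1 trusts the prior weights fully, alpha = 0 retreats to a uniform ensemble.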

Comment by Arenamontanus on Baking is Not a Ritual · 2020-05-27T19:17:21.153Z · LW · GW

I have been baking for a long time, but it took a surprisingly long while to get to this practical "not a ritual" stage. My problem was that I approached it as an academic subject: an expert tells you what you need to know when you ask, and then you try it. But the people around me knew how to bake in a practical, non-theoretical sense. So while my mother would immediately tell me how to fix a too runny batter and the importance of quickly working a pie dough, she could not explain why that worked in terms that I could understand. Much frustration ensued on both sides.

A while ago I came across Harold McGee's "On Food and Cooking" and Jeff Potter's "Cooking for Geeks". These books explained what was going on in a format that made sense to me - starch gelation, protein denaturation, Maillard reactions, and so on - *and* linked it to the language of the kitchen. Suddenly I had freedom to experiment and observe, but with the help of a framework of explicit chemistry and physics that helped me organise the observations. There has been a marked improvement in my results (although my mother now finds me unbearably weird in the kitchen). It is also fun to share these insights.

The lesson of my experience is that sometimes it is important to seek out people who can explain and bootstrap your knowledge by speaking your "language", even if they are not the conveniently close and friendly people around you. When you get non-working explanations they usually do not explain much, and hence just become ritual rules. Figuring out *why* explanations do not work for you is the first step, but then one needs to look around for sources of the right kind of explanations (which in my case took far longer). Of course, if you are not as dependent on theoretical explanations as I am but of a more practical, empirical bent, you can sidestep this issue to a large extent.

Comment by Arenamontanus on Maths writer/cowritter needed: how you can't distinguish early exponential from early sigmoid · 2020-05-11T23:55:27.519Z · LW · GW

Awesome find! I really like the paper.

I had been looking at Fisher information myself during the weekend, noting that it might be a way of estimating uncertainty in the estimation using the Cramer-Rao bound (but quickly finding that the algebra got the better of me; it *might* be analytically solvable, but messy work).
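For what it is worth, the Cramér-Rao bound can be evaluated numerically when the algebra gets messy. A sketch (Python; the logistic parameterization, noise model, and parameter values are my own illustrative choices) for a logistic curve observed with Gaussian noise:

```python
import numpy as np

def logistic(t, K, r, t0):
    # Logistic curve with asymptote K, growth rate r, inflection at t0.
    return K / (1.0 + np.exp(-r * (t - t0)))

def crb_stds(t_obs, K=1.0, r=1.0, t0=10.0, sigma=0.01, eps=1e-6):
    """Cramer-Rao lower bounds on parameter standard deviations for
    i.i.d. Gaussian noise, via a numerical Jacobian (illustrative sketch)."""
    theta = np.array([K, r, t0])
    J = np.empty((len(t_obs), 3))
    for j in range(3):
        d = np.zeros(3)
        d[j] = eps
        J[:, j] = (logistic(t_obs, *(theta + d))
                   - logistic(t_obs, *(theta - d))) / (2 * eps)
    fisher = J.T @ J / sigma**2          # Fisher information matrix
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

early = crb_stds(np.linspace(0, 8, 40))    # data ends before the inflection at t0 = 10
late  = crb_stds(np.linspace(0, 16, 40))   # data extends past the inflection
print(early[0], late[0])                   # bound on the std of the asymptote K
```

The bound on the asymptote blows up when the observation window ends before the inflection point, which is the effect the paper formalises.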

Comment by Arenamontanus on Assessing Kurzweil predictions about 2019: the results · 2020-05-06T16:19:03.606Z · LW · GW

I tried doing a PCA of the judgments, to see if there was any pattern in how the predictions were judged. However, the variance of the principal components did not decline fast. The first component explains just 14% of the variance, the next ones 11%, 9%, 8%... It is not as if there is some dominant low-dimensional or clustering explanation for the pattern of good and bad predictions.

No clear patterns appeared when I plotted the predictions in PCA space. (In the plot, colour denotes the mean assessor view of correctness, with red being incorrect, and size the standard deviation of assessor views, with larger size corresponding to more disagreement.) Some higher-order components may correspond to particular correlated batches of questions, like the VR ones.

(Or maybe I used the Matlab PCA routine wrong).
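The explained-variance computation itself is simple; in Python it would look like this (with random stand-in data, since the judgment matrix is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the prediction-by-assessor judgment matrix:
# rows = predictions, cols = assessors, entries = ratings on a 1-5 scale.
X = rng.integers(1, 6, size=(100, 10)).astype(float)

# PCA via SVD of the column-centered matrix.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

print(np.round(explained[:4], 3))
```

A flat `explained` profile like the 14%, 11%, 9%, 8% described above is also what purely random ratings produce, which is the point: no dominant low-dimensional structure.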

Comment by Arenamontanus on Maths writer/cowritter needed: how you can't distinguish early exponential from early sigmoid · 2020-05-06T09:54:22.296Z · LW · GW

Another nice example of how this is a known result but not presented in the academic literature:

The fundamental problem is not even distinguishing exponential from logistic: even if you *know* it is logistic, the parameters that you typically care about (inflection point location and asymptote) are badly behaved until after the inflection point. As pointed out in the related twitter thread, you gain little information about the inflection point and asymptote in the early phase, and only information about the growth rate and inflection point in the mid phase: it is the sequential nature of the forecasting that creates the problem.
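A quick numerical illustration of the early-phase indistinguishability (Python; the parameter values are arbitrary illustrative choices):

```python
import numpy as np

# Well before the inflection t0, the logistic K/(1+exp(-r(t-t0)))
# is nearly identical to the exponential K*exp(r*(t-t0)).
K, r, t0 = 1.0, 1.0, 10.0
t = np.linspace(0, 5, 50)                 # early phase: well before t0 = 10

logistic_curve = K / (1 + np.exp(-r * (t - t0)))
exp_curve = K * np.exp(r * (t - t0))

max_rel_gap = np.max(np.abs(logistic_curve - exp_curve) / logistic_curve)
print(max_rel_gap)  # a sub-percent gap: measurement noise swamps it in real data
```

With a relative gap under one percent across the whole early window, no realistic noise level lets you tell the two models apart, let alone pin down the asymptote.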

I find it odd that this does not have a classic paper. There are *lots* of Bass curves used in technology adoption studies, and serious business people are interested in using them to forecast - somebody ought to have told them they will be disappointed. It seems to be the kind of result that everybody who knows the field knows but rarely mentions, since it is so obvious.

Comment by Arenamontanus on Solar system colonisation might not be driven by economics · 2020-04-22T13:55:39.702Z · LW · GW

I think the argument can be reformulated like this: space has very large absolute amounts of some resources - matter, energy, distance (distance is a kind of resource useful for isolation/safety). The average density of these resources is very low (solar power in space is within an order of magnitude of solar on Earth) and the matter is often low-grade (Earth's geophysics has created convenient ores). Hence matter and energy collection will only be profitable if (1) access gets cheap, and (2) one can use automated collection with a very low marginal cost - plausibly robotic automation. But (2) implies that a lot of material demand on Earth could be fulfilled the same way, making the only reason to go to space the very large absolute amounts of stuff available there. That is fairly different from most material economics on Earth.

Typical ways of getting around this are either claiming special resources, like the Helium-3 lunar mining proposals (extremely doubtful; He3 fusion requires you to have solved lower-energy fusion, which has plentiful fuels), or special services (zero-gravity manufacturing, military, etc.). I have not yet seen any convincing special resource, and while nice niche services may exist (obviously comms, monitoring and research; tourism? high-quality fibre optics? military use?) they seem to be niche and near-Earth - not enough to motivate settling the place.

So I end up roughly with Stuart: the main reasons to actually settle space would be non-economic. That leads to another interesting question: do we have good data or theory for how often non-economic settlement occurs and works?

I think one interesting case study is Polynesian island settlement. The cost of the exploratory vessels was a few percent of the local economy, but the actual settlement effort may have been somewhat costly (especially in people). Yet this may have reduced resource scarcity and social conflict: it was not so much an investment in getting another island as in getting more of an island to oneself (and avoiding that annoying neighbour).

Comment by Arenamontanus on Subscripting Typographic Convention For Citations/Dates/Sources/Evidentials: A Proposal · 2020-01-08T23:13:04.186Z · LW · GW

Overall, typographic innovations, like all typography, work best the less they stand out while still doing their work. At least in somewhat academic text with references and notation, subscripting appears to blend right in. I suspect the strength of the proposal is that one can flexibly apply it for readers and tone: sometimes it makes sense to say "I~2020~ thought", sometimes "I thought in 2020".

I am seriously planning to use it for inflation adjustment in my book, and may (publisher and test-readers willing) apply it more broadly in the text.

Comment by Arenamontanus on Space colonization: what can we definitely do and how do we know that? · 2019-05-15T07:09:28.702Z · LW · GW

Looking back at our paper, I think the weakest points are (1) we handwave the accelerator a bit too much (I now think laser launching is the way to go), and (2) we also handwave the retro-rockets (it is hard to scale down nuclear rockets; I think a detachable laser retro-rocket is better now). I am less concerned about planetary disassembly and building destination infrastructure: this is standard extrapolation of automation, robotics and APM.

However, our paper mostly deals with sending a civilization's seeds everywhere; it does not deal with near-term space settlement. That requires a slightly different intellectual approach.

What I am doing in my book is trying to look at a "minimum viable product" - not a nice project worth doing (a la O'Neill/Bezos) but the crudest approach that can show a lower bound. Basically, we know humans can survive for years on something like the ISS. If we can show that an ISS-like system can (1) produce food and other necessities for life, (2) allow crew to mine space resources, (3) turn them into more habitat and life support material, (4) let crew thrive well enough to reproduce, and (5) build more copies of itself with new crew at a faster rate than the system fails - then we have a pretty firm proof of space settlement feasibility. I suspect (1) is close to demonstration, (2) and (3) need more work, (4) is likely a long-term question that must be tested empirically, and (5) will be hard to strictly prove at present but can be made fairly plausible.

If the above minimal system is doable (and I am pretty firmly convinced it is - the hairy engineering problems are just messy engineering, rather than pushing against any limits of physics) then we can settle the solar system. Interstellar settlement requires either self-sufficient habitats that can last very long (and perhaps spread by hopping from Oort-object to Oort-object), AI-run mini-probes as in our paper, or extremely large amounts of energy for fast transport (I suspect having a Dyson sphere is a good start).

Comment by Arenamontanus on Hidden universal expansion: stopping runaways · 2017-06-08T15:50:43.168Z · LW · GW

I have not seen any papers about it, but did look around a bit while writing the paper.

However, a colleague and I analysed laser acceleration and it looks even better, especially since one can use non-rigid lens systems to enable longer boosting. We developed the idea a fair bit but have not written it up yet.

I would suspect laser is the way to go.

Comment by Arenamontanus on Regulatory lags for New Technology [2013 notes] · 2017-06-01T20:41:00.326Z · LW · GW

Another domain may be aviation. In the US, from the Wright brothers in 1903 to the Air Commerce Act 1926 it took 23 years.

Wikipedia: "In the early years of the 20th century aviation in America was not regulated. There were frequent accidents, during the pre-war exhibition era (1910–16) and especially during the barnstorming decade of the 1920s. Many aviation leaders of the time believed that federal regulation was necessary to give the public confidence in the safety of air transportation. Opponents of this view included those who distrusted government interference or wished to leave any such regulation to state authorities. Barnstorming accidents that led to such regulations during this period is accurately depicted in the 1975 film The Great Waldo Pepper.

At the urging of aviation industry leaders, who believed the airplane could not reach its full commercial potential without federal action to improve and maintain safety standards, President Calvin Coolidge appointed a board to investigate the issue. The board's report favored federal safety regulation. To that end, the Air Commerce Act became law on May 20, 1926."

The UK introduced regulation in 1920 and the Soviet Union in 1921. So a lag of 17-23 years seems to be a decent estimate here.

Comment by Arenamontanus on Hidden universal expansion: stopping runaways · 2017-05-11T10:48:38.314Z · LW · GW

S. Jay Olson's work on expanding civilizations is very relevant here. That work suggests that even non-hidden civilizations will be fairly close to their light front.

Now, the METI application: if this scenario is true, then sending messages so that the expanding civilization notices us might be risky if they can quieten down and silently englobe or surprise us. (Surprise is likely more effective than englobement, since spamming the sky with quiet relativistic probes is hard to stop)

When does this matter? If we happen to be far away from the civilization, then they will notice the message late, and we could have done all sorts of things in the meantime - escaped, become an equivalently powerful civilization, gone extinct, etc. We would have done the same even if they had not been there. So there is no change.

If they are already here, there is only an effect if they react to the attempt at messaging ("Only talk to civs that want to talk to you"/"Only wipe out civs that might pollute the ether with deliberate messages"); otherwise there is no change.

[ In fact, a civilization that deliberately tries to conceal itself may be particularly concerned with signalling civilizations since they might act as beacons: if they get cut off, that might tell other listeners where something is going on. Keeping them signalling even when well inside their englobement might be good camouflage. Until they are discreetly replaced with decoy young civilizations...]

So the only case that leads to nontrivial effects is the nearby but not here case. Friendly civilizations react by doing something friendly, but that would have happened anyway when they came here. Unfriendly civilizations use the information to be more efficiently unfriendly (like quiet englobement). The main thing that changes is that the opportunity for running away (or other reactive responses) decreases.

So the overall utility change is U(METI)-U(no METI) = -Pr[nearby but not here] * U(running away).

Now, Pr[nearby but not here] seems to be small for rapidly expanding civilizations. If they spread at speed v, then it is 1-(v/c)^3. For v=0.9c, it is only 27.1%, and for 0.99c it is 2.97%. So it is a small net negative, assuming U(running away) is positive.
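For the record, the relevant fraction works out as 1 - (v/c)^3; a quick check in Python:

```python
# Fraction of the relevant volume where an expanding civilization is
# visible but not yet arrived: 1 - (v/c)^3, in units with c = 1.
for v in (0.9, 0.99):
    print(v, round(1 - v**3, 4))  # → 0.9 0.271 and 0.99 0.0297
```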

All of the above is conditioned on that interstellar/intergalactic expansion is doable. If it isn't, we get informational game theory instead.

Comment by Arenamontanus on A toy model of the control problem · 2015-09-16T15:10:31.838Z · LW · GW

It would be neat to actually make an implementation of this to show sceptics. It seems to be within the reach of an MSc project or so. The hard part is representing 2-5.

Comment by Arenamontanus on Top 9+2 myths about AI risk · 2015-07-01T09:17:49.350Z · LW · GW

I think you will find this discussed in the Hanson-Yudkowsky foom debate. Robin thinks that distributed networks of intelligence (also known as economies) are indeed a more likely outcome than a single node bootstrapping itself to extreme intelligence. He has some evidence from the study of firms, which is a real-world example of how economies of scale can produce chunky but networked smart entities. As a bonus, they tend to benefit from playing somewhat nicely with the other entities.

The problem is that while this is a nice argument, would we want to bet the house on it? A lot of safety engineering is not about preventing the most likely malfunctions, but the worst malfunctions. Occasional paper jams in printers are acceptable, fires are not. So even if we thought this kind of softer distributed intelligence explosion was likely (I do) we could be wrong about the possibility of sharp intelligence explosions, and hence it is rational to investigate them and build safeguards.

Comment by Arenamontanus on Top 9+2 myths about AI risk · 2015-07-01T09:10:19.056Z · LW · GW

I remember that we were joking at the NYC Singularity Summit workshop a few years back that maybe we should provide AI researchers with heroin and philosophers to slow them down.

As far as I have noticed, there are few if any voices in the academic/nearby AI safety community that promote slowing AI research as the best (or even a good) option. People talking about relinquishment or slowing seem to be far outside the main discourse, typically people who have only a passing acquaintance with the topic or a broader technology scepticism.

The best antidote is to start thinking about the details of how one would actually go about it: that generally shows why differential development is sensible.

Comment by Arenamontanus on Top 9+2 myths about AI risk · 2015-06-30T08:14:43.076Z · LW · GW

I recently gave a talk at an academic science fiction conference about whether sf is useful for thinking about the ethics of cognitive enhancement. I think some of the conclusions are applicable to point 9 too:

(1) Bioethics can work in a "prophetic" and a "regulatory" mode. The first is big picture, proactive and open-ended, dealing with the overall aims we ought to have, possibilities, and values. It is open for speculation. The regulatory mode is about ethical governance of current or near-term practices. Ethicists formulate guidelines, point out problems, and suggest reforms, but their purpose is generally not to rethink these practices from the ground-up or to question the wisdom of the whole enterprise. As the debate about the role of speculative bioethics has shown, mixing the modes can be problematic. (Guyer and Moreno 2004) really takes bioethics to task for using science fiction instead of science to motivate arguments: they point out that this can actually be good if one does it inside the prophetic mode, but a lot of bioethics (like the President's Council on Bioethics at the time) cannot decide what kind of consideration it is.

(2) Is it possible to find out things about the world by making stuff up? (Elgin 2014) argues that fictions and thought experiments do exemplify patterns or properties that they share with phenomena in the real world, and hence we can learn something about the realized world from considering fictional worlds (i.e. there is a homeomorphism between them in some domain). It does require the fiction to be imaginative but not lawless: not every fiction or thought experiment has value in telling us something about the real or moral world. This is of course why just picking a good or famous piece of fiction as a source of ideas is risky: it was selected not for how it reflects patterns in the real world, but for other reasons.

Considering Eliezer's levels of intelligence in fictional characters is a nice illustration of this: level 1 intelligence characters show some patterns (being goal directed agents) that matter, and level 3 characters actually give examples of rational skilled cognition.

(3) Putting this together: if you want to use fiction in your argument, the argument had better be in the more prophetic, open-ended mode (e.g. arguing that there are AI risks of various kinds, what values are at stake, etc.), and the fiction needs to meet high standards not just of internal consistency but of actual mappability to the real world. If the discussion is on the more regulatory side (e.g. thinking of actual safeguards, risk assessment, institutional strategies) then fiction is unlikely to be helpful, and very likely introduces biasing or noise elements (due to good-story bias, easily inserted political agendas, or different interpretations of worldview).

There are of course some exceptions. Hannu Rajaniemi provides a neat technical trick to the AI boxing problem in the second novel of his Quantum Thief trilogy (turn a classical computation into a quantum one that will decohere if it interacts with the outside world). But the fictions most people mention in AI safety discussions are unlikely to be helpful - mostly because very few stories succeed with point (2) (and if they are well written, they hide this fact convincingly!)

Comment by Arenamontanus on No peace in our time? · 2015-05-26T15:23:03.112Z · LW · GW

Well, 70 years of 1/37 annual risk still has about a 15% chance of showing zero wars. Could happen. (Since we are talking about smaller ones rather than WWIII, anthropics doesn't distort the probabilities measurably.)

One could buy a Pinker improvement scenario and yet be concerned about a heavy tail due to nuclear or bio warfare of existential importance. The median case might decline and the rate of events go down, yet the tail could get nastier.

Comment by Arenamontanus on Anthropic signature: strange anti-correlations · 2014-10-23T00:50:01.113Z · LW · GW

This is incidentally another way of explaining the effect. Consider the standard diagram of the joint probability density and how it relates to correlation. Now take a bite out of the upper right corner of big-X and big-Y events: unless the joint density started out with a really strange shape, this will tend to make the correlation negative.

Comment by Arenamontanus on Anthropic signature: strange anti-correlations · 2014-10-21T23:53:43.899Z · LW · GW

It is pretty cute. I did a few Matlab runs with power-law distributed hazards, and the effect holds up well.
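The Matlab runs are not shown, but the effect is easy to reproduce. A sketch in Python (the hazard distribution and survival threshold are my own illustrative choices, not the original setup):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two independent power-law (Pareto) hazard sizes; observers exist only in
# runs where the combined hazard stays below a survival threshold.
n, alpha, threshold = 1_000_000, 2.5, 3.0
x = rng.pareto(alpha, n) + 1.0   # Pareto with minimum value 1
y = rng.pareto(alpha, n) + 1.0

survived = (x + y) < threshold
r = float(np.corrcoef(x[survived], y[survived])[0, 1])
print(round(r, 3))  # clearly negative: survivorship anti-correlates the hazards
```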

Comment by Arenamontanus on Anthropic signature: strange anti-correlations · 2014-10-21T19:44:00.570Z · LW · GW

Neat. The minimal example would be if each risk had 50% chance of happening: then the observable correlation coefficient would be -0.5 (not -1, since there is 1/3 chance to get neither risk). If the chance of no disaster happening is N/(N+2), then the correlation will be -1/(N+1).
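The minimal example can be checked exactly (Python; the parameterization with each risk having probability 1/(N+1) is my reconstruction - it makes the survivor's chance of no disaster come out to N/(N+2) as stated):

```python
from fractions import Fraction

def survivor_stats(N):
    """Two independent risks, each with probability p = 1/(N+1); observers
    survive unless both occur. Returns (P(neither | survival), corr(X, Y | survival)),
    in exact rational arithmetic."""
    p = Fraction(1, N + 1)
    norm = 1 - p * p                 # P(survival) = 1 - P(both occur)
    p_neither = (1 - p) ** 2 / norm
    ex = p * (1 - p) / norm          # E[X | survival] = P(X=1, Y=0 | survival)
    cov = -ex * ex                   # E[XY | survival] = 0, so cov = -E[X]E[Y]
    var = ex * (1 - ex)
    return p_neither, cov / var

for N in (1, 2, 5):
    neither, corr = survivor_stats(N)
    print(N, neither, corr)          # neither = N/(N+2), corr = -1/(N+1)
```

N = 1 reproduces the 50%-risk case: a 1/3 chance of neither risk and a correlation of exactly -1/2.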

It is interesting to note that many insurance copula methods are used to model size-dependent correlations, but these are nearly always of the type with stronger positive correlations in the tail. This suggests - unsurprisingly - that insurance does not encounter much anthropic risk.

Comment by Arenamontanus on How to write an academic paper, according to me · 2014-10-15T14:57:48.066Z · LW · GW

In some journals there is a text box with up to four take home message sentences summarizing what the paper gives us. It is even easier to skim than the abstract, and typically stated in easy (for the discipline) language. I quite like it, although one should recognize that many papers have official conclusions that are a bit at variance with the actual content (or just a biased glass half-full/half-empty interpretation).

Comment by Arenamontanus on How to write an academic paper, according to me · 2014-10-15T13:02:21.254Z · LW · GW

The standard formula you are typically taught in science is IMRaD: Introduction, Methods, Results, and Discussion. This of course mainly works for papers that are experimental, but I have always found it a useful zeroth iteration for structure when writing reviews and more philosophical papers: (1) explain what it is about, why it is important, and what others have done. (2) explain how the problem is or can be studied/solved. (3) explain what this tells us. (4) explain what this new knowledge means in the large, the limitations of what we have done and learned, as well as where we ought to go next.

Experienced academics also scan the reference section to see who is cited. This is a surface level analysis of whether the author has done their homework, and where in the literature the paper is situated. It is a crude trick, but fairly effective in saving time. It also leads to a whole host of biases, of course.

Different disciplines work in different ways. In medicine everybody loves to overcite ("The brain [1] is an organ commonly found in the head [2,3], believed to be important for cognition [4-18,23].") Computer science is lighter on citations and more forgiving of self-cites (the typical paper cites Babbage/Turing, a competing algorithm, and two tech reports and a conference poster by the author about the earlier version of the algorithm). Philosophy tends to either be very low on citations (when dealing with ideas), or have nitpicky page and paragraph citations (when dealing with what someone really argued).

Comment by Arenamontanus on Mini advent calendar of Xrisks: nuclear war · 2012-12-08T15:58:43.802Z · LW · GW

Actually, when I did my calculations my appreciation of Szilard increased. He was playing a very clever game.

Basically, in order to make a cobalt bomb you need 50 tons of neutrons absorbed into cobalt. The only way of doing that requires a humongous hydrogen bomb. Note when Szilard did his talk: before the official announcement of the hydrogen bomb. The people who could point out the problem with the design would be revealing quite sensitive nuclear secrets if they said anything - the neutron yield of hydrogen bombs was very closely guarded, and was only eventually reverse-engineered by studies of fallout isotopes (to the great annoyance of the US, apparently).

Szilard knew that 1) he was not revealing anything secret, 2) criticising his idea required revealing secrets, and 3) the bomb was impractical, so even if somebody tried they would not get a superweapon thanks to his speech.

I think cobalt bombs can be done, but you need an Orion drive to launch them into the stratosphere. The fallout will not be even, leaving significant gaps. And due to rapid gamma absorption in sea water the oceans will be semi-safe. Just wash your fishing boat so fallout does not build up, and you have a good chance of survival.

Basically, if you want to cause an xrisk by poisoning the biosphere, you need to focus on breaking a key link rather than generic poisoning. Nukes for deliberate nuclear winter or weapons that poison the oxygen-production of the oceans are likely more effective than any fallout-bomb.

Comment by Arenamontanus on The lessons of a world without Hitler · 2012-01-16T22:35:53.841Z · LW · GW

Given that overconfidence is one of the big causes of bad policy, maybe a world without Hitler would have worse policies if Stuart's guesses at the end were true. It would possibly be overconfident about niceness, negotiations, democracy and supra-national institutions. On the other hand, it might be more cautious about developing nuclear weapons. So maybe it would be more vulnerable to nasty totalitarian surprises, but have slightly better safety against nuclear GCRs.

As a non-historian I don't know how to properly judge historical what-ifs well: not only am I uncertain about how to analyse the counterfactual methodology itself, but I am uncertain about what historical data we need to know in order to do a proper counterfactual. But looking at how different worldviews depend on particular historical events and doing at least some estimate of how robust those events were, might indeed tell us a bit about where we might have ended up with contingent world-views.

In my own field of human enhancement ethics it is pretty clear that some of the halo effect of Nazism and its defeat in WWII led to a very strong negative value association that is relatively arbitrary but affects current policies. Had they been doing bad sociology instead we might have been decrying sinister social engineering, while happily selecting the genes of our children. If there had been an anti-USSR WWII the same might have happened.

Comment by Arenamontanus on If you don't know the name of the game, just tell me what I mean to you · 2010-10-27T13:20:49.307Z · LW · GW

It seems that the bargaining for mu will be dependent on your priors about what games will be played. That might help fix the initial mu-bargaining.

Comment by Arenamontanus on A Less Wrong singularity article? · 2009-11-19T18:36:18.017Z · LW · GW

I think this is very needed. When reviewing singularity models for a paper I wrote, I could not find many readily citable references to certain areas that I know exist as "folklore". I don't like mentioning such ideas because it makes it look (to outsiders) as if I had come up with them, and the insiders would likely think I was trying to steal credit.

There are whole fields like friendly AI theory that need a big review. Both to actually gather what has been understood, and in order to make it accessible to outsiders so that the community thinking about it can grow and deepen.

Whether this is a crowdsourcable project is another matter, but at the very least crowdsourcing raw input for later expert paper construction sounds like a good idea. I would expect that eventually it would need to boil down to one or two main authors doing most of the job, and a set of co-authors for speciality skills and prestige. But since this community is less driven by publish-or-perish and more by rationality concerns I expect ordering of co-authors may be less important.

Comment by Arenamontanus on A Less Wrong singularity article? · 2009-11-19T18:25:14.347Z · LW · GW

The way to an authoritative paper is not just to have the right co-authors but mainly having very good arguments, cover previous research well and ensure that it is out early in an emerging field. That way it will get cited and used. In fact, one strong reason to write this paper now is that if you don't do it, somebody else (and perhaps much worse) will do it.

Comment by Arenamontanus on News: Improbable Coincidence Slows LHC Repairs · 2009-11-09T17:00:59.310Z · LW · GW

Actually, if you do the experiment a number of times and always get suspicious hindrances, then you have good empirical evidence that something anthropic is going on... and that you have likely destroyed yourself in a lot of universes.

Comment by Arenamontanus on Shortness is now a treatable condition · 2009-10-22T01:02:06.575Z · LW · GW

The linked article is problematic. There is a pretty well-agreed-on correlation between IQ and income (the image obscures this). In the case of wealth the article claims that there is a non-linear relationship that makes really smart people have a low wealth level. But this is due to the author fitting a third-degree polynomial to the data! I am pretty convinced it is a case of overfitting. See my critique post for more details.

Comment by Arenamontanus on Shortness is now a treatable condition · 2009-10-22T00:56:55.955Z · LW · GW

There is one study that demonstrated that among top 1% SAT scorers investigated some years after testing, the upper quartile produces about twice the number of patents as the lower one (and about 6 times the average, if I remember right). That seems to imply that having more really top performers might produce more useful goods even if the vast majority of them never invent anything great.

Even a tiny shift upwards of everybody's IQ has a pretty impressive multiplicative effect at the high end.
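The tail multiplication is easy to see with a normal model of IQ (Python; the threshold and shift size are illustrative):

```python
from math import erf, sqrt

def tail_fraction(threshold_iq, mean=100.0, sd=15.0):
    """Fraction of a normal(mean, sd) IQ distribution above a threshold."""
    z = (threshold_iq - mean) / sd
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# A small 3-point shift of the whole distribution multiplies the
# population above IQ 145 by almost 1.9x.
before = tail_fraction(145, mean=100.0)
after = tail_fraction(145, mean=103.0)
print(round(after / before, 2))
```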

Interpersonal skills are more important for job success than IQ, but I doubt great skills will produce goods useful across society in the same way as an invention does. A high EQ person probably just makes the local social network better, which has a relatively limited overall effect.

Comment by Arenamontanus on Shortness is now a treatable condition · 2009-10-22T00:50:28.785Z · LW · GW

I would be happy. The low end of the intelligence scale has on average pretty bad lives (higher risks of accidents, illness, crime and bad school outcomes, lower income and lower life satisfaction), so on purely utilitarian grounds it would be good. But their inefficiency and costs also reduce the overall economy and cost a lot of tax money, directly or indirectly. Hence I would be better off with them smarter - it might reduce my competitive advantage a bit, but I think the faster economic growth would balance that. A lot of our market value resides in our unique skills rather than general skills anyway.

Comment by Arenamontanus on Shortness is now a treatable condition · 2009-10-21T13:06:51.012Z · LW · GW

The definition of illness is one of the perennials in the philosophy of medicine. Robert Freitas has a nice list in the first chapter of Nanomedicine, which is by no means exhaustive.

In practice, the typical "down-on-the-surgery-floor" approach is to judge whether a condition impairs "normal functioning". This is relative to everyday life and the kind of life the patient tries to live - and of course contains a lot of subjective judgements. Another good rule of thumb is that illness impairs someone's flexibility - they have fewer possibilities.

Personally I prefer Freitas's volitional model, where we give strong weight to the desires and goals of the patient. If I want to fly and could somehow be cured of weight, then that should be allowed. However, seeing medical interventions as allowed is not the same as claiming they have to be supported by everybody else (positive and negative rights and all that). There is much truth in saying that illness is what a society thinks we should altruistically pay to treat in others, while health improvements beyond that tend to be up to the individual.

The problem is that the altruism pool is limited (quite possibly for murky evolutionary-psychology reasons - consider Robin's "Showing That You Care" paper), as are shared resources, while the space of possible medical interventions is growing and human wants are of course nearly unbounded. Hence there is a constant struggle among stakeholders to bring their conditions into the realm of altruism and obligatory treatment.

A further problem is that we currently also roughly identify the category of illness treatments with allowable treatments (with some exceptions like preventative medicine, cosmetic surgery, etc.) and non-illness treatments with disallowed ones (doping, enhancement). This might be a reaction meant to rein in costs and the illness category, but it also reflects concerns that non-altruist medicine would be socially bad. I have strong suspicions this is misguided and actually decreases human happiness.

In the end, the goal of medicine should always be human flourishing, not health. Health is instrumental to living a good life, but what kind of health is needed depends very much on individual life projects. I believe that in the future we are going to see much more health pluralism.

Comment by Arenamontanus on Dying Outside · 2009-10-07T02:00:30.675Z · LW · GW

Sad news, but a very brave and positive response. If I ever end up in a comparable situation I wish I can handle it with this level of poise.

It is worth noting that people are far more flexible about what constitutes a life worth living than most healthy people believe. Brickman, Coates and Janoff-Bulman (1978) famously found that individuals who had become paraplegic or quadriplegic within the previous year reported only slightly lower levels of life satisfaction than healthy individuals (and lottery winners also converged on their setpoint). This is particularly interesting given that many people say they would rather be dead than quadriplegic. They are likely wrong.

It might be a good idea not just to train with tools such as BCIs as a preparatory stage, but also to ensure that they get integrated with the action systems of your brain. There is some evidence that people with locked-in syndrome or similar conditions have a hard time learning to use them if they get them after their condition has worsened, while people who get them before can use them better. Some researchers believe the reason is linked to the ability to see oneself as an agent - we normally reinforce this every waking moment, but if you can't use your agency to affect the world, the agency might atrophy. Right now this is merely a hypothesis, but it might be a reason to attempt to supercharge your agency and extend it to an exoself.

Comment by Arenamontanus on Your Most Valuable Skill · 2009-09-29T22:56:06.775Z · LW · GW

I think my most valuable skill is my ability to build models of problems and systems. Not necessarily great and complete models, but at least something that encapsulates a bit of what seems to be going on and produces output that can be compared with the system. After a few iterations of modelling/comparison/correction I have usually at least learned something useful. It works both for napkin calculations and software simulations. It is a great tool for understanding many systems or checking intuitions.

Others have mentioned the skill of "letting go". I have trained myself in the speciality of letting go of negative emotions, which has produced a very sunny (nearly pathologically positive) mood. Recognizing that most negative emotions like despair, sadness or anger don't solve the problem that caused them is very helpful. A positive, can-do mindset is a pretty useful asset. However, there are tricky trade-offs to be made between the lowered ability to dwell on negatives plus optimism bias on one hand, and the hedonic enjoyment of a good mood on the other.

Comment by Arenamontanus on The Great Brain is Located Externally · 2009-06-26T17:54:50.933Z · LW · GW

When I visited Beijing a few years back, I could not access Wikipedia due to censorship. This made me aware of how often I unconsciously checked things on the site - the annoyance of not getting the page made me note a previously unseen offloading habit.

I expect that many offloading methods work like this. We do not notice that we use them, and that adds to their usefulness. They do not waste our attention or cognition. But it also means that we are less likely to critically examine the activity. Is the information reliable? Are we paying an acceptable price for it? Would a break in the access be problematic for our functioning? Does the offloading bias us in some way?

The last point might be particularly relevant here. Some resources provide information easily, so they tend to be used in favour of more cumbersome sources. Online papers are easy, trips to the library take time and effort - so we cite online papers more, even when original old sources are more appropriate. If it is problematic to check that a system is calibrated right and important to use it (that deadline is tonight! the customers are waiting!), we might become extensions of biased collective cognition.

Maybe we need to develop check-lists for checking our outsourced cognition?

Comment by Arenamontanus on Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation · 2009-06-19T16:03:45.548Z · LW · GW

One bias that I think is common among smart, academically minded people like us is that the value of intelligence is overestimated. I certainly think we have some pretty good objective reasons to believe intelligence is good, but we also add biases because we are a self-selected group with a high "need for cognition" trait, in a social environment that rewards cleverness of a particular kind. In the population at large the desire for more IQ is noticeably lower (and I get far more spam about Viagra than Modafinil!).

If I were on the Hypothetical Enhancement Grants Council, I think I would actually support enhancement of communication and cooperative ability slightly more than pure cognition. More cognitive bang for the buck if you can network a lot of minds.

Comment by Arenamontanus on Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation · 2009-06-19T15:55:52.069Z · LW · GW

This is why papers like H. Rindermann, Relevance of Education and Intelligence for the Political Development of Nations: Democracy, Rule of Law and Political Liberty, Intelligence, v36 n4 p306-322 Jul-Aug 2008 are relevant. This one looks at lagged data, trying to infer how much schooling, GDP and IQ at time t1 affect schooling, GDP and IQ at time t2.

The bane of this type of study is of course the raw scores - how much cognitive ability is actually measured by school scores, surveys, IQ tests or whatever means are used - and whether averages are telling us something important. One could imagine a model where extreme outliers were the real force of progress (I doubt this, given that IQ does seem to correlate with a lot of desirable things and likely has network effects, but the data is likely not strong enough to rule out an outlier theory).

Comment by Arenamontanus on Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation · 2009-06-18T18:28:04.978Z · LW · GW

In many debates about cognition enhancement the claim is that it would be bad, because it would produce compounding effects - the rich would use it to get richer, producing a more unequal society. This claim hinges on the assumption that there would be an economic or social threshold to enhancer use, and that it would produce effects that were strongly in favour of just the individual taking the drug.

I think there is good reason to suspect that enhancement has positive externalities - lower costs due to stupidity, individual benefits that produce tax money, perhaps better governance, cooperation and more great ideas. In fact, it might be that these benefits are more powerful than the individual ones. If everybody got 1% smarter, we would not notice much improvement in everyday life, but the economy might grow a few percent and we would get slightly faster technological development and better governance. That might actually turn the problem into a free-rider problem: unless you really want to be smarter, taking the enhancer might be a net cost to you (risk of side effects, for example). So you might want everybody else to take the enhancers, and then reap the benefits without the cost.
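That free-rider structure can be made concrete with a toy payoff model (all numbers here are illustrative assumptions, not estimates): each taker pays a private cost, while the benefit they generate is diluted across the whole population.

```python
N = 1000   # population size (illustrative)
c = 1.0    # private cost of taking the enhancer (side-effect risk, etc.)
b = 2.0    # total social benefit generated per taker, shared by everyone

def my_payoff(i_take, others_taking):
    """One individual's payoff given how many others take the enhancer."""
    takers = others_taking + (1 if i_take else 0)
    shared_benefit = b * takers / N      # each taker's benefit is diluted N ways
    return shared_benefit - (c if i_take else 0.0)

# Whatever the others do, abstaining gives a higher individual payoff...
print(my_payoff(False, 500) > my_payoff(True, 500))   # True
# ...yet universal uptake beats universal abstention for everyone:
print(my_payoff(True, N - 1) > my_payoff(False, 0))   # True
```

As long as one person's share of the benefit they create (b/N) is below their private cost c, abstaining dominates individually even though everybody taking is collectively better - the textbook public-goods structure.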

Comment by Arenamontanus on Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation · 2009-06-18T18:18:58.019Z · LW · GW

The national/regional IQ literature is messy, because there are so many possible (and even likely) feedback loops between wealth, schooling, nutrition, IQ and GDP. Not to mention the rather emotional views of many people on the topic, as well as the lousy quality of some popular datasets. Lots of clever statistical methods have been used, and IQ seems to retain a fair chunk of explanatory weight even after other factors have been taken into account. Some papers have even looked at staggered data to see if IQ works as a predictor of future good effects, which it apparently does.

Whether it would be best to improve IQ, health or wealth directly depends not just on which has the biggest effect, but also on how easy it is and how the feedbacks work.

Comment by Arenamontanus on Ask LessWrong: Human cognitive enhancement now? · 2009-06-18T12:51:13.368Z · LW · GW

Charles H. Hillman, Kirk I. Erickson & Arthur F. Kramer, Be smart, exercise your heart: exercise effects on brain and cognition, Nature Reviews Neuroscience 9, 58-65 (January 2008) suggests aerobic fitness training in particular as being important.

Comment by Arenamontanus on Ask LessWrong: Human cognitive enhancement now? · 2009-06-18T12:46:02.066Z · LW · GW

The Klingberg group in Sweden have done somewhat similar experiments, with positive results in children both with and without ADHD. See their publications.

Comment by Arenamontanus on Ask LessWrong: Human cognitive enhancement now? · 2009-06-18T12:41:54.148Z · LW · GW

I found a PowerPoint from Kevin Warwick by googling for "Reading/Swatting -6" that included the data, but only loose references to the studies. I'll email him and ask.

Comment by Arenamontanus on Ask LessWrong: Human cognitive enhancement now? · 2009-06-18T12:34:40.843Z · LW · GW

Even small amounts of glucose can apparently have significant effects. I have some papers in my library arguing that the memory-enhancing effects of adrenaline (which doesn't cross the blood-brain barrier) are mediated by the glucose increase it causes. One of them demonstrated that a glucose-mimetic molecule also acted as an enhancer. Overall, the data seems pretty convincing that getting a suitable dose of glucose is enhancing, but the effect follows an inverted-U curve - there is an individual- and task-dependent optimal level.

Overall, drug responses are very individual and we shouldn't expect enhancers to be "one size fits all". For me, modafinil + sleep deprivation produces a state that feels like sleep deprivation but is apparently quite functional (as measured by my ability to write software). Mileage may vary, indeed.

Comment by Arenamontanus on Ask LessWrong: Human cognitive enhancement now? · 2009-06-18T12:28:21.238Z · LW · GW

The quick answer is that most stimulants make animals and people rely on well-learned stimulus-response patterns rather than considering the situation and figuring out an appropriate response, and often make them impulsive - when a situation partially looks like one where you should do "A", the A response is hard to resist. A classic case was the US Air Force friendly-fire incident blamed on dexamphetamine. This is where the improved response inhibition of modafinil comes in. See

Turner DC, Clark L, Dowson J, Robbins TW, Sahakian BJ. Modafinil improves cognition and response inhibition in adult attention-deficit/hyperactivity disorder. Biol Psychiatry. 2004 May 15;55(10):1031-40.

Turner DC, Robbins TW, Clark L, Aron AR, Dowson J, Sahakian BJ. Cognitive enhancing effects of modafinil in healthy volunteers. Psychopharmacology (Berl). 2003 Jan;165(3):260-9. Epub 2002 Nov 1.

Comment by Arenamontanus on Ask LessWrong: Human cognitive enhancement now? · 2009-06-18T12:15:58.276Z · LW · GW

I think this is a good idea, although SNPs might be overdoing it (for now; soon it will be cheap enough to sequence whole genomes and run whatever tests we like). There is a dearth of data on cognitive enhancers in real settings, and a real need to see what actually works, for whom and for what.

What I would like to see is volunteers testing themselves on a number of dimensions including IQ, working memory span, Big Five personality and ideally a bunch of biomarkers. In particular it would be good if we could get neurotransmitter levels, but to my knowledge there are no direct measurement methods that aren't invasive - there are a few indirect measures that may have some validity. Genotyping for cytochromes might be a good way to check for pharmacogenomic effects.

In this ideal experiment, these volunteers would then run small online cognitive tests of various kinds every day, as well as enter comments (including side effects) into a medical blog. In an even more ideal experiment there would be extra data from life recording, and in a complete dream world people would actually get bottles with placebo and the drug as mixed pills. One could imagine a site, "enhancement@home", that gathered these data and acted as a flexible privacy filter. People could use an API to data-mine the reports and look for effects, especially when controlling for various traits.

I think it is doable. I think it could be very helpful. I also think that developing this kind of "wiki-epidemiology" would require some serious planning beforehand, to ensure the data collected is actually likely to tell us something useful, to not run afoul of bad laws (medical and drug-use data comes with some hefty regulations) and to make it appealing for everybody.

Comment by Arenamontanus on Ask LessWrong: Human cognitive enhancement now? · 2009-06-17T00:16:45.117Z · LW · GW

(this is a rough sketch based on my research, which involves reviewing cognition enhancement literature)

Improving cognitive abilities can be done in a variety of ways, from exercise to drugs to computer games to asking clever people. The core question one should always ask is: what is my bottleneck? Usually there are a few faculties or traits that limit us the most, and these are the ones that ought to be dealt with first. Getting a better memory is useless if the real problem is lack of attention or the wrong priorities.

Training working memory using suitable software is probably one of the most useful enhancers around right now - cheap, safe, and with effects on core competencies.

When it comes to enhancement drugs, my top recommendations are: 1) sugar, 2) caffeine, 3) modafinil (and then comes a long list of other enhancers). Sugar is useful because it is effective, safe, legal and has well-understood side effects. Just identify your optimum level and find a way of maintaining it (this requires training one's self-monitoring skills, always useful). For all drugs, there is a degree of personal variability one has to understand. Caffeine is similar, and mainly useful for reducing tiredness symptoms rather than boosting anything. Modafinil has some useful stimulant effects, appears reasonably safe and in particular doesn't seem to impair considered choice as amphetamines do. Methylphenidate may have its uses, but it depends on dopamine levels. Nicotine is a reasonable memory enhancer, as long as it is taken in a healthy form (gum, lozenges, etc.). I am not sure piracetam actually works, and ginkgo biloba appears to be mostly a vasodilator (= good if you have circulatory problems, but perhaps not otherwise).

Healthy lifestyle matters. As I remarked in another comment, exercise has a documented effect. It is rational not just for health but for cognition (so why don't I exercise? I need an anti-akrasia enhancement more than IQ!). Getting enough good sleep also improves performance as well as memory consolidation. Health in general appears to promote better cognition.

Learning to control one's mind is useful. A lot of people allow themselves to be distracted, annoyed or mentally sloppy. Doing a bit of internal cognitive behavioral therapy to identify bad ideas and behaviors, and fixing them, is a good idea. Easier said than done, but virtue is a habit. Just ask yourself whether you would like to have any given behavior as a habit, and then act accordingly (the nano-version of the Categorical Imperative). Memory arts and other mental techniques can be useful, but are usually too specialized to be generally intelligence-enhancing (I use memory arts just to remember grocery lists and ideas I get in the shower). Relaxation techniques (or just an awareness of how one works under stress) are very useful in everyday life.

Finally, the key thing is to exercise the mind by giving it challenges. Trite, but true. We tend to develop our skills best when we are at the edge of our abilities, not when the situation is routine or well-controlled. Hence attempting to climb higher cognitive mountains is both healthy and useful. If you have mastered special relativity, go for general relativity - or try to understand what poststructuralism really is about. As Drexler suggested on his blog, acquiring a broad knowledge of what exists in other fields is also useful.

One interesting finding shows that one's beliefs about one's own improvability strongly correlate with actual performance increases when training. I do not know whether this is true for all cognitive domains, but I wouldn't be too surprised if it were generally true.

Comment by Arenamontanus on Ask LessWrong: Human cognitive enhancement now? · 2009-06-16T23:52:01.923Z · LW · GW

Exercise has a demonstrated good effect on memory and a bunch of other mental stats; the cause appears to be the release of neural growth factors (and likely better circulation and general health).

Comment by Arenamontanus on Intelligence enhancement as existential risk mitigation · 2009-06-16T23:49:28.227Z · LW · GW

Yes, in many places nutrition is a low-hanging fruit. My own favorite example is iodine supplementation, but vitamins, long-chain fatty acids and simply enough nutrients to allow full development are also pretty good. There is some debate about how much of the Flynn effect of increasing IQ scores is due to nutrition (probably not all, but likely a good chunk). It is an achievable way of enhancing people without triggering the usual anti-enhancement opinions.

The main problem is that it is pretty long-term. The infants we save today will be making their mark two or more decades hence - they will not help us much with the problems we face before then. But this is a problem for most kinds of biological enhancement; developing it and getting people to accept it will take time. That is why gadgets are important - they diffuse much more rapidly.

Comment by Arenamontanus on Intelligence enhancement as existential risk mitigation · 2009-06-16T18:23:51.957Z · LW · GW

I have tried to research the economic benefits of cognition enhancement, and they are quite possibly substantial. But I think Roko is right about the wider political ramifications.

One relevant reference may be H. Rindermann, Relevance of Education and Intelligence for the Political Development of Nations: Democracy, Rule of Law and Political Liberty, Intelligence, v36 n4 p306-322 Jul-Aug 2008, which argues (using cross-lagged data) that education and cognitive ability have bigger positive effects on democracy, rule of law and political liberty than GDP does. There are of course plenty of reciprocal factors.

As I argued below in my comment on consensus formation, in many situations a slightly larger group of smart people might matter. The effect might be limited under certain circumstances (e.g. the existence of big enough non-truth-seeking, biased groups, like the pro-ethanol groups), but intelligence is something that acts across most of life - it will affect not just political behaviour but economic, social and cultural behaviour. That means it will have multiple chances to affect society.

Would it actually reduce existential risks? I do not know. But given the correlations between long-term orientation, cooperation and intelligence, it seems plausible that it would help not just in discovering risks but also in ameliorating them. It might be that other noncognitive factors like fearfulness or some innate discounting rate are more powerful. But intelligence can also co-opt noncognitive factors (e.g. a clever advertising campaign exploiting knowledge of cognitive biases to produce a desirable behavior).

Comment by Arenamontanus on Intelligence enhancement as existential risk mitigation · 2009-06-16T18:12:06.168Z · LW · GW

More intelligence means bigger scope for action, and more ability to get desired outcomes. Whether more intelligence increases risk depends on the distribution of accidentally bad outcomes in the new scope (and how many old bad outcomes can be avoided), and whether people will do malign things. On average very few people seem to be malign, so the main issue is likely more the issue of new risks.

Looking at the great deliberate human-made disasters of the past suggests that they were often more of a systemic nature (societies allowing nasty people or social processes to run their course; e.g. democides and genocides) than due to individuals or groups successfully breaking rules (e.g. terrorism). This is actually a reason to support cognitive enhancement if it can produce more resilient societies less prone to systemic risks.

Comment by Arenamontanus on Intelligence enhancement as existential risk mitigation · 2009-06-16T18:01:50.381Z · LW · GW

Here is a simple model. Assume you need a certain intelligence to understand a crucial, policy-affecting idea (we can make this a fuzzy border and talk about distributions later to make it more realistic). If you are below this level your policy choices will depend on taking up plausible-sounding arguments from others, but they will be uncorrelated with the truth. Left alone, such a population will perform some form of random walk with amplification, ending up with a random decision. If you are above the critical level your views will be somewhat correlated with the truth. Since you affect others when you engage in political discourse, whether over the breakfast table or on TV, you will have an impact on other people, increasing their chance of agreeing with you. This biases the random walk of public opinion slightly in favour of truth.

In most models of political agreement formation I have seen, even a pretty small minority that is biased in a certain direction can sway a large group that just picks views based on its neighbours. This would suggest that increasing the set of people smart enough to get the truth would substantially increase the likelihood of a correct group decision.
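A crude simulation in this spirit (the parameter values - 5% informed agents, a 70% chance that an informed agent picks the true view - are arbitrary assumptions for illustration): most agents simply copy a random other agent, while a small informed minority is weakly correlated with the truth.

```python
import random

def simulate(n=500, informed_frac=0.05, truth_bias=0.7, steps=50_000, seed=0):
    """Biased voter model: view 1 is 'true'. Uninformed agents copy a random
    other agent; informed agents pick the true view with probability truth_bias."""
    rng = random.Random(seed)
    views = [rng.choice([0, 1]) for _ in range(n)]
    informed = set(rng.sample(range(n), int(n * informed_frac)))
    for _ in range(steps):
        i = rng.randrange(n)
        if i in informed:
            views[i] = 1 if rng.random() < truth_bias else 0
        else:
            views[i] = views[rng.randrange(n)]   # copy a random other agent
    return sum(views) / n  # final fraction holding the true view

runs = [simulate(seed=s) for s in range(10)]
print(sum(r > 0.5 for r in runs), "of 10 runs ended with a truth-leaning majority")
```

Even with only 5% of agents weakly correlated with the truth, the population mean gets pulled toward the informed agents' accuracy (about 0.7 here) instead of drifting to a random consensus, which is the amplification effect described above.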