Intelligence enhancement as existential risk mitigation
post by Roko · 2009-06-15T19:35:07.530Z · LW · GW · Legacy · 244 comments
Here at Less Wrong, the Future of Humanity Institute and the Singularity Institute, a recurring theme is trying to steer the future of the planet away from disaster. Often, the best way to avert a particular disaster is quite hard for ordinary people to understand as it requires one to think through an argument in a cool, unemotional way; more often than not the best solution will be lost in a mass of low signal-to-noise ratio squabbling and/or emoting. Whatever the substance of the debate, the overall meta-problem is quite well captured by this catch from this month's rationality quotes:
"People are mostly sane enough, of course, in the affairs of common life: the getting of food, shelter, and so on. But the moment they attempt any depth or generality of thought, they go mad almost infallibly.
Attempting to target the meta-problem of getting people to be slightly less mad when it comes to abstract or general thought, especially public policy, is a tempting option. Robin Hanson's futarchy proposal is one way to combat this madness (which it does by removing most people from the policymaking loop). However, another important route to combating human idiocy is to find technologies that make humans smarter. Nick Bostrom has proposed that we work hard at looking for ways to enhance the cognition of research scientists, because even a small increase in the average intelligence of research scientists would increase research output by a large amount, as there are lots of scientists. But improving the decisionmaking process of our society would probably have an even more profound effect: if we could improve the intelligence of the average voter by about one standard deviation, it is easy to speculate that the political decisionmaking process would work much better. For example, understanding simple logical arguments and simple quantitative analyses already stretches the capabilities of someone at IQ 100, so even marginal increases in overall IQ should produce a large marginal increase in the probability that a politician is incentivized to make a logical argument, rather than an emotionally appealing slander, the main focus of their campaign.
As a concrete example, consider the initial US reaction to rising oil prices and the need for US-produced energy: pushing corn ethanol, because a strong farming lobby liked the idea of having extra revenue. Now, if the *average voter* could understand the concept of photosynthetic efficiency, and could follow a simple numerical calculation showing how inefficient corn is at converting solar energy into stored energy in ethanol (a back-of-envelope version is sketched after the quote below), this policy choice would have been dead in the water. But the average voter cannot do simple physics, whereas they can understand the emotional appeal of "support our local farmers!". Even today, there are still politicians who defend corn ethanol because they want to pander to local interest groups. Another concrete example is some of the more useless responses that the UK public has been engaging in - and being encouraged to engage in - to prevent global warming. People were encouraged to unplug their mobile phone chargers when the chargers weren't being used. David MacKay had to wage a personal war against such idiocy - see this Guardian article. The universal response to my criticism of people advocating this was "it all adds up!". I quote:
There's a lack of numeracy in the public discussion of energy. Where people do use numbers, they select them to sound big and score points in arguments, rather than to aid thoughtful discussion.
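To make the corn example concrete, here is the kind of simple calculation meant - a minimal back-of-envelope sketch in Python. Every input figure (average insolation, corn yield, ethanol yield, energy density) is a rough assumption chosen for illustration, not authoritative data:

```python
# Back-of-envelope: what fraction of the solar energy falling on a cornfield
# ends up as chemical energy in corn ethanol? All figures are rough assumptions.

ACRE_M2 = 4047.0                # one acre in square metres
AVG_INSOLATION_W_M2 = 200.0     # assumed year-round average solar flux
SECONDS_PER_YEAR = 3.15e7

corn_yield_bu_per_acre = 150.0  # assumed corn yield, bushels/acre/year
ethanol_gal_per_bu = 2.8        # assumed ethanol yield per bushel
LITRES_PER_GAL = 3.785
ETHANOL_MJ_PER_L = 21.2         # approximate energy density of ethanol

solar_in_MJ = AVG_INSOLATION_W_M2 * ACRE_M2 * SECONDS_PER_YEAR / 1e6
ethanol_L = corn_yield_bu_per_acre * ethanol_gal_per_bu * LITRES_PER_GAL
ethanol_out_MJ = ethanol_L * ETHANOL_MJ_PER_L

print(f"solar input : {solar_in_MJ:.3g} MJ/acre/year")
print(f"ethanol out : {ethanol_out_MJ:.3g} MJ/acre/year")
print(f"efficiency  : {100 * ethanol_out_MJ / solar_in_MJ:.2f} %")
# Roughly 0.1% - and this ignores the fossil energy spent on farming,
# fertilizer and distillation, which is why the net output may be negative.
```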
Toby Ord has a project on efficient charity; he has worked out that the difference in outcomes per dollar for alleviating human suffering in Africa can vary by three orders of magnitude. But most people in the developed world don't know what an "order of magnitude" is, or why it is a useful concept. This efficient charity concept demonstrates that the derivative
d(Outcomes)/d(Average IQ)
may be extremely large, and may be subject to powerful threshold effects. In this case, there is probably an average-IQ threshold above which the average person can easily understand the concept of efficient charity, so that all the money gets given to the most efficient charities and the amount of suffering-alleviation in Africa goes up by a factor of 1000, even though the average IQ of the donor community may only have jumped from, say, 100 to 140.
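To make the threshold intuition concrete, here is a toy model - purely illustrative, with made-up parameters - in which the fraction of donors who grasp efficient charity follows a logistic curve in average IQ, so that total suffering-alleviation behaves almost like a step function:

```python
import math

def frac_understanding(avg_iq, threshold=120.0, width=5.0):
    """Toy logistic model: fraction of donors who grasp efficient charity."""
    return 1.0 / (1.0 + math.exp(-(avg_iq - threshold) / width))

def outcomes(avg_iq, budget=1.0, efficiency_ratio=1000.0):
    """Suffering alleviated per unit budget: understanders pick charities
    assumed ~1000x more effective than the ones everyone else picks."""
    f = frac_understanding(avg_iq)
    return budget * (f * efficiency_ratio + (1.0 - f) * 1.0)

for iq in (100, 110, 120, 130, 140):
    print(iq, round(outcomes(iq), 1))
# d(Outcomes)/d(Average IQ) is tiny far from the assumed threshold and
# enormous near it - the threshold effect described above.
```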
It may well be the case that finding a cognitive enhancer suitable for general use is the best way to tackle the diverse array of risks we face. People with enhanced IQ would also probably find it easier (and be more willing) to absorb material on cognitive biases; to see this, try to explain the concept of "cognitive biases" to someone who is unlucky enough to be of below-average IQ, and then go and explain it to someone who is smarter than you. It is certainly the case that even people of below-average IQ *do sometimes*, in favourable circumstances, take note of quantitative rational arguments, but in the maelstrom of politics such quantitative analyses get eaten alive by more emotive arguments like "SUPPORT OUR FARMERS!" or "SUPPORT OUR TROOPS!" or "EVOLUTION IS ONLY A THEORY!" or "IT ALL ADDS UP!".
244 comments
Comments sorted by top scores.
comment by Psychohistorian · 2009-06-16T01:40:16.568Z · LW(p) · GW(p)
I think that most people who do not have severe cognitive deficiencies are capable of understanding what "efficient charities" are. I think that most people are quite capable of understanding the statement, "Ethanol will waste a lot of money and will still generate as much (or more) pollution than gasoline. To top it off, it will also raise the price of food products, both for you and for people who will actually starve as a result." With most issues like this, one can figure out what's going on by reading Wikipedia for half an hour. Perhaps that takes a high IQ, but in my experience, when people are given clear and accurate arguments, they are generally capable of getting them. The problem is that they never bother seeking out decent arguments. They either just don't care, or they seek out arguments that support whatever their beliefs happen to be.
In other words, the problem is not that people are stupid. The problem is that people simply don't give a damn. If you don't fix that, I doubt raising IQ will be anywhere near as helpful as you may think.
↑ comment by Roko · 2009-06-16T13:19:21.884Z · LW(p) · GW(p)
I think that most people who do not have severe cognitive deficiencies are capable of understanding what "efficient charities" are
I have specific empirical evidence against this point from attempting to convince people on Facebook Causes to support more efficient causes instead. I am considering a top-level post on it.
↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T21:40:49.938Z · LW(p) · GW(p)
More detail would be wonderful!
↑ comment by RobinHanson · 2009-06-17T03:24:02.343Z · LW(p) · GW(p)
Well, be sure to clarify whether they were really trying to understand. People who do not want to understand can look a lot like people unable to understand.
↑ comment by Psychohistorian · 2009-06-16T18:53:13.657Z · LW(p) · GW(p)
Does this mean they don't understand, they don't care, or they don't share your utility function? The fact that they disagree does not mean they don't understand.
↑ comment by Roko · 2009-06-16T09:10:15.667Z · LW(p) · GW(p)
If a less intelligent person is presented with correct and only correct arguments, they may have a higher probability of voting in accordance with them. But often in reality they will be presented with "fake" arguments, especially by naughty politicians or religious leaders - for example, arguments like "evolution is only a theory", which are specifically designed to be persuasive without being true. Intelligence is required to tell the difference.
↑ comment by Annoyance · 2009-06-16T18:44:41.304Z · LW(p) · GW(p)
Intelligence, while useful, isn't what's required in that scenario.
Skepticism, curiosity, and intellectual integrity are.
↑ comment by Roko · 2009-06-16T22:20:05.882Z · LW(p) · GW(p)
I claim that intelligence - specifically IQ - helps people to tell the difference between sophistry and genuine arguments. This seems reasonable to me.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-17T08:59:16.255Z · LW(p) · GW(p)
I'm not sure this is correct. I might endorse a statement to the effect that fluid g is necessary in order to learn the more advanced skills of distinguishing sophistry, but very few people, high-g or otherwise, actually learn such a skill.
↑ comment by Roko · 2009-06-17T16:23:33.121Z · LW(p) · GW(p)
It is probably possible for a less intelligent person to learn (to some extent) to distinguish sophistry from solid argument. But it is much easier and comes more naturally for a smart person.
The amount of mental determination required to do a task decreases as the task becomes easier; the amount of ability required to perform a task decreases as the determination to succeed increases.
I suspect that it will be easier to persuade people to take a pill that makes them smarter than to persuade them to spend months or years studying critical thinking.
↑ comment by Annoyance · 2009-06-17T19:24:48.602Z · LW(p) · GW(p)
But it is much easier and comes more naturally for a smart person
No, no, no! It's harder for a smart person to learn this, because they're so much better at producing clever rationalizations to explain away their cognitive dissonance.
The person it's easiest to fool is yourself, and the more IQ you have, the better you are at coming up with really convincing stupidity. Recognizing valid reasoning and forcing yourself to adhere to the standards that define it requires something IQ doesn't measure.
↑ comment by Annoyance · 2009-06-17T19:21:52.068Z · LW(p) · GW(p)
"Seems reasonable" is not a valid criterion for judgment.
Your claim is factually incorrect. The ability to tell the difference between sophistry and valid arguments rests on two things: first, awareness of the standards of validity, and second, the capacity to override the convictions that come from our associative thinking and evaluate the situation rationally.
High IQ permits people to come up with very complex and sophisticated rationalizations. It doesn't help them distinguish rationalization from rationality.
↑ comment by RobinHanson · 2009-06-17T03:24:57.678Z · LW(p) · GW(p)
Yes, this is the key problem: people don't really want to understand. That is the problem futarchy is intended to solve.
↑ comment by komponisto · 2009-06-16T03:18:18.272Z · LW(p) · GW(p)
Agree. Or, one might say: the problem is not so much one of intelligence as one of (surprise!) rationality.
↑ comment by CronoDAS · 2009-06-16T05:48:13.942Z · LW(p) · GW(p)
Ditto... although being ill-informed can't help either.
I once heard a certain political figure speak at a university. He said that when he gave speeches in areas in which the majority supported his political party, explaining what problems he was trying to solve, they would simply react as a supportive audience - but when he gave speeches in areas where his party was unpopular, they also approved of him, saying that they were horrified and angry because nobody had ever told them about this problem before. He concluded by saying that a Republican is a Democrat who doesn't know what's going on.
More disturbingly, giving people a list of falsehoods often causes them to later remember those falsehoods as true. (See also this Eliezer post.)
↑ comment by SoullessAutomaton · 2009-06-16T22:41:03.398Z · LW(p) · GW(p)
Ethanol will waste a lot of money and will still generate as much (or more) pollution than gasoline.
As an aside, given that the pollutant du jour is atmospheric carbon, it's worth noting that burning ethanol is essentially carbon neutral. Ethanol also means not being dependent on foreign nations run by crazy religious whackjobs.
Not that corn ethanol is a good idea overall, but it does have points to recommend it vs. petroleum. It just has... a lot of points to disrecommend it in general.
↑ comment by Jack · 2009-06-17T02:55:44.726Z · LW(p) · GW(p)
The thing is, liquid fuel (oil or otherwise) is traded on a mostly open, global market. So the prices of ethanol and gasoline are both dictated to a large extent by the supply produced by OPEC. So producing ethanol can only make us "independent" in the sense that it makes OPEC/Venezuelan/Russian oil a smaller fraction of global energy production. But as long as those sources constitute a decent fraction of global production (and you can't grow enough ethanol to change that) they can still drive up prices on a whim.
I also think the jury is still out on the carbon neutrality thing (at least as far as I can tell from surveying the research) - especially if you were already growing corn or had to chop down a rain forest to get your farm land.
↑ comment by SoullessAutomaton · 2009-06-17T10:06:04.751Z · LW(p) · GW(p)
Seriously? Do we actually trade fuel ethanol? Why would anyone want it, and if they did, how could we compete with Brazil's sugar cane ethanol? How odd. My impression is that the only reason corn ethanol is as cheap as gas is because of huge government subsidies to the corn growers. Perhaps the whole situation is even sillier than I realized.
Also, I think the main idea of "energy independence" is that, if it came down to it, we could switch to completely internal energy sources and tell the rest of the world to go shove it. It'd be a diplomatic stick to beat people with in the sense of "we don't need you", not an otherwise significant on-going economic factor.
I also think the jury is still out on the carbon neutrality thing (at least as far as I can tell from surveying the research) - especially if you were already growing corn or had to chop down a rain forest to get your farm land.
The carbon in the ethanol was extracted from the atmosphere in the past couple years, burning it releases it back. That's pretty much the definition of carbon-neutral. Pretty much the same should be the case for anything else you do with the corn, including eating it. But, you're right, there are other considerations. Chopping down rainforest is absolutely not carbon-neutral.
↑ comment by Jack · 2009-06-17T16:36:40.870Z · LW(p) · GW(p)
My point is just that the whole energy independence thing is a red herring, since energy is traded on an open market. If we suddenly had to depend only on energy produced in the U.S., the resulting price increases would be prohibitive for everyone but the military (and we already have strategic oil reserves).
The whole concept of energy independence is a political cudgel to turn energy politics into security politics by taking advantage of people's mercantilist intuitions about resources. But in a global free market those intuitions are wrong. Strictly speaking, you don't get to decide where your energy comes from. Often it is cheaper to get it from nearby sources, but it is all part of the same pricing system. So yeah, increased domestic production might make us more independent in the sense that foreign countries won't have quite the same ability to knock prices up, but there isn't a magic line where suddenly we're "independent".
Let's say we have 100 units of oil. 50 units are produced in the Middle East; the U.S. uses 40 but currently produces 20. The remaining 30 units are produced by other countries. An OPEC embargo leaves us with 50 units, about doubling the price (for convenience - supply curves are usually nonlinear, and I'm not an economist, so I don't know what the price would really do beyond going up, a lot). If the U.S. increases production to 40 units, prices pre-embargo will be lower (since we have a supply of 120), but prices will still increase by a similar amount when we lose 50 of that 120 supply, leaving us with only 70. If the whole world turned against us, prices would triple (according to our invented price model), as we would have only 40 in supply where once we had 120.
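In code, the toy model in this comment looks like the following (a minimal sketch; the inverse-supply price rule is just the "invented price model" mentioned above, not real oil economics):

```python
def price(supply):
    """Invented price rule from the comment: price inversely proportional
    to supply, normalised so that supply 100 gives price 1.0."""
    return 100.0 / supply

# World supply 100; an OPEC embargo removes 50 units:
print(price(50) / price(100))    # 2.0  -> "about doubling the price"

# US production rises from 20 to 40 units, so world supply is 120:
print(price(120) / price(100))   # ~0.83 -> pre-embargo prices are lower...
print(price(70) / price(120))    # ~1.71 -> ...but an embargo still bites hard

# The whole world (80 of the 120 units) embargoes the US:
print(price(40) / price(120))    # 3.0  -> "prices would triple"
```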
In order to make it so the rest of the world really couldn't affect us we'd have to be producing a preponderance of the world's energy such that foreign embargoes would only slightly affect total supply. Either that or we embargo foreign energy imports. But we'd be independent like chopping off your legs makes you independent of bicycles. Also, I imagine if our transportation infrastructure was such that it didn't use gasoline we'd be much better insulated from price shifts in foreign oil. Like if our cars were all solar powered or if we were taxing CO2 to a prohibitive extent already. But in those cases technology has either rendered oil irrelevant or our economy has already internalized the cost of a foreign oil embargo.
Hopefully I'm making some sense.
↑ comment by SoullessAutomaton · 2009-06-18T01:19:22.468Z · LW(p) · GW(p)
Hopefully I'm making some sense.
You are. I obviously haven't thought the issue through clearly, and it's not something I care deeply about, since ethanol is silly for many other, larger reasons - so I won't waste your time with further questions. Definitely moving the whole "energy independence" thing to the "I have no idea" column for now.
↑ comment by saturn · 2009-06-18T09:04:30.096Z · LW(p) · GW(p)
In short, we're "independent" when the cost of importing fossil fuels exceeds the cost of domestic production. The two ways this can happen are drastic technological improvements, and subsidies/tariffs on fuel resulting in economically wasteful overproduction. In the US we tend to give lip service to the former while actually implementing the latter.
comment by JamesAndrix · 2009-06-16T20:29:51.432Z · LW(p) · GW(p)
If you are surrounded by money pumps, is it rational to bet with them, or correct their functioning?
↑ comment by Roko · 2009-06-16T22:13:19.351Z · LW(p) · GW(p)
This is a good point.
Instead of worrying about existential risks, I could join a good startup team/start my own startup and aim to make lots of money.
However, what would happen if the world got through all of these challenges to a positive post-singularity world? People would ask questions such as: "why did you not put effort into existential risk mitigation when you knew the dangers?" I would ask that question.
So the answer to the question "is it rational" depends upon your goals.
↑ comment by JamesAndrix · 2009-06-17T01:02:24.930Z · LW(p) · GW(p)
That's not quite what I meant, but I do believe Eliezer has advised anyone not working on 'risks' (or FAI?) to make as much money as they can and contribute to an organization that is.
What I meant was that given a money pump, the straightforward thing to do is to pump it, not fix it in the hope that it will somehow benefit humanity.
It seems to me that on LW, most believe that people are irrational in ways that should make people money pumps, but the reaction to this is to make extreme efforts to persuade people of things.
Improving someone's intelligence or rationality is difficult if they're not already looking, but channeling away some of their funds or political capital will lessen the impact their irrationality can have.
↑ comment by orthonormal · 2009-06-17T01:34:27.335Z · LW(p) · GW(p)
most believe that people are irrational in ways that should make people money pumps
I think that most people are (unconsciously) rational enough to avoid being financially exploited in any but the obvious known ways (lotteries, advertising affecting preferences, etc) for which there's already a competitive market or regulation. Aside from those avenues, most people are too wary of being cheated to make good money pumps.
What's not being covered as much are the ways that people irrationally contribute to the destruction of wealth and utility, by voting for ineffective or detrimental policies or by spreading harmful memes. (People don't typically have an evolved horror of doing these things.) A push towards rationality on those fronts can help everyone.
If you disagree, do you have a particularly effective money-pump in mind? (Of course you might hesitate to share it.)
↑ comment by Annoyance · 2009-06-19T15:27:53.310Z · LW(p) · GW(p)
That's not quite what I meant, but I do believe Eliezer has advised anyone not working on 'risks' (or FAI?) to make as much money as they can and contribute to an organization that is.
So he's chosen the "work the money pumps" option, then, rather than trying to correct them.
↑ comment by JamesAndrix · 2009-06-19T16:25:55.444Z · LW(p) · GW(p)
If you believe that he believes that this is not the best use of that person's anti-risk time, then yes.
But I think he's sincerely looking for the best way to mitigate those risks.
comment by RolfAndreassen · 2009-06-16T19:52:45.093Z · LW(p) · GW(p)
I do not know if this strategy would apply in the Western world; but in Africa, I think much could be gained simply by nutritional intervention. IQ is, to a good approximation, 50% heritable; much of the environmental effect is childhood nutrition; it follows that widespread distribution of vitamins might have a nice effect over time, in addition to the more usual benefits. It also seems possible that this might work in those strata of the American population that subsist mainly on fast food, although I expect the effect would be smaller - likely there are already patchwork government programs that distribute vitamins to the poor.
↑ comment by Arenamontanus · 2009-06-16T23:49:28.227Z · LW(p) · GW(p)
Yes, in many places nutrition is a low-hanging fruit. My own favorite example is iodine supplementation (http://www.practicalethicsnews.com/practicalethics/2008/12/the-perfect-cog.html), but vitamins, long-chain fatty acids, and simply enough nutrients to allow full development are also pretty good. There is some debate about how much of the Flynn effect of increasing IQ scores is due to nutrition (probably not all, but likely a good chunk). It is an achievable way of enhancing people without triggering the normal anti-enhancement opinions.
The main problem is that it is pretty long-term. The infants we save today will be making their mark two or more decades hence - they will not help us much with the problems we face before then. But this is a problem for most kinds of biological enhancement; developing it and getting people to accept it will take time. That is why gadgets are important - they diffuse much more rapidly.
comment by derekz · 2009-06-15T20:11:09.731Z · LW(p) · GW(p)
I suppose the question is not whether it would be good, but rather how. Some quick brainstorming:
I think people are "smarter" now then they were, say, pre-scientific-method. So there may be more trainable ways-of-thinking that we can learn (for example, "best practices" for qualitative Bayesianism)
Software programs for individuals. Oh, maybe when you come across something you think is important while browsing the web you could highlight it and these things would be presented to you occasionally sort of like a "drill" to make sure you don't forget it, or prime association formation at a later time. Or some kind of software aid to "stack unwinding" so you don't go to sleep with 46 tabs open in your web browser. Or some short-term memory aid that works better than scratch paper. Or just biting the bullet and learning Mathematica to an expert level instead of complaining about its UI. Or taking a cutting-edge knowledge representation framework like Novamente's PLN and trying to enter stuff into it as an "active" note-taking system.
Collaboration tools -- shared versions of the above ideas, or n-way telephone conversations, or freeform "chatroom"-style whiteboards or iteratively-refined debate thesis statements, or lesswrong.com
Man-machine hybrids. Like having people act as the utility function or search-order-control of an automated search process.
Of course, neural prostheses may become possible at some point fairly soon. Specially-tailored virtual environments to aid in visualization (like of nanofactories), or other detailed and accurate scientific simulations allowing for quick exploration of ideas... "Do What I Mean" interfaces to CAD programs might be possible if we can get a handle on the functional properties of human cognitive machinery...
↑ comment by gwern · 2009-06-17T03:29:20.557Z · LW(p) · GW(p)
Software programs for individuals. Oh, maybe when you come across something you think is important while browsing the web you could highlight it and these things would be presented to you occasionally sort of like a "drill" to make sure you don't forget it, or prime association formation at a later time.
Congratulations, you've nearly reinvented spaced repetition! There is a great deal of writing on spaced repetition flashcard systems, so I won't inflict upon you my own writings; but the Wikipedia article will link you to the main programs (Anki, Mnemosyne, and SuperMemo) and some writeups of the topic. SR is a great technique; I love it dearly.
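For readers curious about the mechanics, here is a minimal sketch of the interval-update rule at the heart of SuperMemo-style systems - a simplified SM-2, not the exact algorithm used by Anki, Mnemosyne, or SuperMemo:

```python
def sm2_update(interval_days, ease, quality):
    """Simplified SM-2 step. quality: 0 (total blackout) .. 5 (perfect recall).
    Returns (next_interval_days, new_ease)."""
    if quality < 3:                      # failed recall: relearn from scratch
        return 1, ease
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days <= 1:
        return 6, ease
    return round(interval_days * ease), ease

# A card recalled correctly again and again gets pushed out exponentially:
interval, ease = 1, 2.5
for review in range(5):
    interval, ease = sm2_update(interval, ease, quality=4)
    print(f"review {review + 1}: next repetition in {interval} days")
```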
Or some short-term memory aid that works better than scratch paper.
Well, you could just improve your working memory. Unusually, working memory is plastic enough to be trainable by WM tasks. The WM exercise I'm most familiar with is Dual n-back. I practice it, but while I have noticed improvements, I'm unsure whether they repay the time I've put into it; SR systems have proven themselves as far as I'm concerned, but the jury is still out on dual n-back.
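For the unfamiliar, the task itself is easy to state. Here is a sketch of a single-modality round (real dual n-back runs two such streams at once - grid positions plus spoken letters - and adapts n to performance):

```python
import random

def n_back_round(n=2, length=20, alphabet="ABCD"):
    """Generate a stimulus stream for an n-back round and list the target
    positions: the player must respond whenever the current item matches
    the one shown n steps earlier."""
    stream = [random.choice(alphabet) for _ in range(length)]
    targets = [i for i in range(n, length) if stream[i] == stream[i - n]]
    return stream, targets

stream, targets = n_back_round()
print("stimuli:", " ".join(stream))
print("targets:", targets)   # positions where "match" is the right answer
```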
Or taking a cutting-edge knowledge representation framework like Novamente's PLN and trying to enter stuff into it as an "active" note-taking system.
Now that sounds interesting. But looking at this OpenCog link doesn't give me a good idea as to what PLN might do for note-taking (or really, in general); did you have any use-cases or examples?
↑ comment by derekz · 2009-06-17T04:18:17.321Z · LW(p) · GW(p)
No specific use cases or examples, just throwing out ideas. On the one hand it would be cool if the notes one jots down could self-organize somehow, even a little bit. Now, OpenCog is supposed by its creators to be a fully general knowledge representation system, so maybe it's possible to use it as a sort of notation (like a probabilistic-logic version of Mathematica? Or maybe with a natural-language front end of some kind? I think Ben Goertzel likes Lojban, so maybe an intermediate language like that).
Anyway, it's not really a product spec just one possible sort of way someday to use machines to make people smarter.
(but that was before I realized we were talking about pills to make people stop liking their favorite tv shows, heh)
↑ comment by Henrik_Jonsson · 2009-06-17T05:51:42.493Z · LW(p) · GW(p)
On the one hand it would be cool if the notes one jots down could self-organize somehow, even a little bit.
While I agree that it would be cool, anything that doesn't keep your notes exactly like you left them is likely to be more annoying than productive unless it is very cleverly done. (Remember Microsoft Clippy?) You'd probably need to tag at least some things, like persons and places.
↑ comment by asciilifeform · 2009-06-15T20:23:45.130Z · LW(p) · GW(p)
Software programs for individuals.... prime association formation at a later time.... some short-term memory aid that works better than scratch paper
I have been obsessively researching this idea for several years. One of my conclusions is that an intelligence-amplification tool must be "incestuously" user-modifiable ("turtles all the way down", possessing what programming language designers call reflectivity) in order to be of any profound use, at least to me personally.
Or just biting the bullet and learning Mathematica to an expert level instead of complaining about its UI
About six months ago, I resolved to do exactly that. While I would not yet claim "black belt" competence in it, Mathematica has already enabled me to perform feats which I would not have previously dared to contemplate, despite having worked in Common Lisp. Mathematica is famously proprietary and the runtime is bog-slow, but the language and development environment are currently in a class of their own (at least from the standpoint of exploratory programming in search of solutions to ultra-hard problems).
↑ comment by SilasBarta · 2009-06-17T22:52:45.875Z · LW(p) · GW(p)
Could you give more examples of things you like about Mathematica? Years ago, I resolved to become an expert at it after reading A New Kind of Science (will you guys forgive me?) and liked it for a while, but then noticed some things were needlessly complicated or refused to spit out the right results (it was a long time ago, so I can't give examples).
Btw, I learned about Lisp after Mathematica, and was like, "wow, that must have been where Wolfram got the idea."
↑ comment by asciilifeform · 2009-06-18T19:18:51.143Z · LW(p) · GW(p)
Could you give more examples about things you like about Mathematica?
1) Mathematica's programming language does not confine you to a particular style of thinking. If you are a Lisp fancier, you can write entirely Lispy code. Likewise Haskell. There is even a capability for relatively painless dataflow programming.
2) Wolfram Inc. took great pains to make interfacing with the outside world from within the app as seamless as possible. For example, you can suck in a spreadsheet file directly into a multidimensional array. There is import and export capability for hundreds of formats, including obscure scientific and engineering ones. In case the built-in formats do not suffice, defining custom ones is surprisingly easy.
3) A non-headache-inducing replacement for regular expressions. Enough said.
4) Graphical objects (likewise audio and other streams) are first-class data types. They are able to appear as both the inputs and outputs of functions.
5) Lastly, and most importantly: fully interactive program development. The rest of the programming universe lives a life of endlessly repeated "compile and pray" cycles. Mathematica permits you to meaningfully evaluate and edit in place every line of code you write. I am otherwise an Emacs junkie, yet I have never felt the slightest desire to touch Emacs when working on Mathematica code. The programmer's traditional need to wade through and shovel giant piles of text from one place to another while writing code is almost entirely absent when working in this language.
The downsides of Mathematica (slow, proprietary, expensive, etc.) are widely known. Thus far, the advantages have vastly outweighed the problems for my particular kind of work. However, I have found that I now feel extremely confined when forced to work in any other programming language. Perhaps this risk should be added to the list of disadvantages.
I learned about Lisp after Mathematica, and was like, "wow, that must have been where Wolfram got the idea."
Wolfram had (at least in the early days of Mathematica) a very interesting relationship with Lisp. He seems to have initially rejected many of its ideas, but it is clear that they somehow crept back into his work as time went by.
↑ comment by derekz · 2009-06-16T15:02:34.758Z · LW(p) · GW(p)
Thanks for the motivation, by the way -- I have toyed with the idea of getting Mathematica many times in the past but the $2500 price tag dissuaded me. Now I see that they have a $295 "Home Edition", which is basically the full product for personal use. I bought it last night and started playing with it. Very nifty program.
↑ comment by SilasBarta · 2009-06-17T22:49:20.807Z · LW(p) · GW(p)
I don't know whether to applaud your ethical restraint, or pity your ignorance. I'll go with the first ;-)
↑ comment by derekz · 2009-06-17T23:28:12.270Z · LW(p) · GW(p)
If you're wondering whether I'm aware that I can figure out how to steal software licenses, I am.
ETA: I don't condemn those who believe that intellectual property rights are bad for society or immoral. I don't feel that way myself, though, so I act accordingly.
↑ comment by SilasBarta · 2009-06-18T03:25:50.525Z · LW(p) · GW(p)
It's theoretically possible to believe in IP (on some level), but lack the will not to pluck the forbidden fruit.
↑ comment by Roko · 2009-06-15T20:27:58.032Z · LW(p) · GW(p)
While I would not yet claim "black belt" competence in it, Mathematica has already enabled me to perform feats which I would not have previously dared to contemplate, despite having worked in Common Lisp. Mathematica is famously proprietary and the runtime is bog-slow, but the development environment is currently in a class of its own (at least from the standpoint of exploratory programming in search of solutions to ultra-hard problems.)
Sounds cool, but this is not quite what I was aiming at.
↑ comment by asciilifeform · 2009-06-15T21:17:11.676Z · LW(p) · GW(p)
not quite what I was aiming at
I am curious what you had in mind. Please elaborate.
↑ comment by Roko · 2009-06-15T23:43:20.774Z · LW(p) · GW(p)
I had in mind average Joe the truck driver who cannot understand an argument like "Corn ethanol is a bad idea because the energy conversion efficiency of corn plants is extremely low, so the energy output of the process, including all the farming and processing, may be negative", but who instead falls victim to "Corn ethanol is good because you should SUPPORT OUR FARMERS!"
You're talking about enhancing the efficiency of the smartest people (like you), I'm talking about enhancing the efficiency of the average person.
↑ comment by derekz · 2009-06-15T20:38:00.398Z · LW(p) · GW(p)
Well, if you are really only interested in raising the average person's "IQ" by 10 points, it's pretty hard to change human nature (so maybe Bostrom was on the right track).
Perhaps if somehow video games could embed some lesson about rationality in amongst the dumb slaughter, that could help a little -- but people would probably just buy the games without the boring stuff instead.
↑ comment by Roko · 2009-06-15T23:57:07.353Z · LW(p) · GW(p)
The problem with all of these is that they are all likely to be adopted mostly by the minority of people who are already very smart, whereas this post is aiming at something for the average intelligence people who comprise the majority of the population.
↑ comment by asciilifeform · 2009-06-16T00:38:19.079Z · LW(p) · GW(p)
I cannot pin down this idea as rigorously as I would like, but there seems to exist such a trait as liking to think abstractly, and that this trait is mostly orthogonal to IQ as we understand it (although a "you must be this tall to ride" effect applies.) With that in mind, I do not think that any but the most outlandishly powerful and at the same time effortless intelligence amplifier will be of much interest to the bulk of the population.
↑ comment by Roko · 2009-06-16T13:29:30.239Z · LW(p) · GW(p)
I did not address the issue of actually getting people to take cognitive enhancers in my post. It is a huge can of worms that would take at least a whole post to get into. Let's concentrate on the hypothetical here: IF we could get people to do this, then it would be a good thing.
↑ comment by derekz · 2009-06-16T13:54:41.589Z · LW(p) · GW(p)
I'm still baffled about what you are getting at here. Apparently training people to think better is too hard for you, so I guess you want a pill or something. But there is no evidence that any pill can raise the average person's IQ by 10 points (which kind of makes sense: if some simple chemical-balance adjustment could have such a dramatic effect on fitness, it would be quite surprising). Are you researching a sci-fi novel or something? What good does wishing for magical pills do?
↑ comment by Roko · 2009-06-16T14:09:46.489Z · LW(p) · GW(p)
But there is no evidence that any pill can raise the average person's IQ by 10 points. Are you researching a sci-fi novel or something? What good does wishing for magical pills do?
Well, we haven't looked very hard, and I am trying to advocate, along with people like Nick Bostrom, that more research is urgently needed in this area.
(which kind of makes sense: if some simple chemical-balance adjustment could have such a dramatic effect on fitness, it would be quite surprising)
See The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement
"a greater level of mental activity might also enable us to apply our brains more effectively to process information and solve problems. The brain, however, requires extra energy when we exert mental effort, reducing the normally tightly regulated blood glucose level by about 5 per cent (0.2 mmol/l) for short (<15 min) efforts and more for longer exertions.¹⁵ Conversely, increasing blood glucose levels has been shown to improve cognitive performance in demanding tasks."
↑ comment by derekz · 2009-06-16T14:48:48.455Z · LW(p) · GW(p)
If the point of this essay was to advocate pharmaceutical research, it might have been more effective to say so; it would have made the process of digesting it smoother. Given the other responses, I think I am not alone in failing to guess that this was pretty much your sole target.
I don't object to such research; a Bostrom article saying "it might not be impossible to have some effect" is weak support for a 10-IQ-point average-gain pill, but that's not a reason to avoid looking for one. Never know what you'll find. I'm still not clear what the takeaway from this essay is for a lesswrong reader, though, unless it is to suggest that we should experiment ourselves with the available chemicals.
I've tried many of the ones that are obtainable. Despite its popularity, I found piracetam to have no noticeable effect even after taking it for extended periods of time. Modafinil is the most noticeable of all; it doesn't seem to do much for me while I'm well-rested, but it does remove some of the sluggishness that can come with fatigue, although I think the results on an IQ test would be unnoticeable (maybe a 6-hour test, something to highlight endurance, could show a measurable difference). Picamilon has a subtler effect that I'm not sure how to characterize. I'm thinking of trying xanthinol nicotinate, but have not yet done so. Because of the small effects I do not use these things as a component of my general lifestyle, both for money reasons and because of the general uncertainty of long-term effects (also mild but sometimes unpleasant side effects). The effects of other more common drugs like caffeine and other stimulants are probably stronger than any of the "weird" stuff, and are widely known. Thinking beyond IQ, there are of course many drugs with cognitive effects that could be useful on an occasional-use basis, but that's beyond the scope of this discussion.
↑ comment by Roko · 2009-06-16T15:56:00.013Z · LW(p) · GW(p)
If the point of this essay was to advocate pharmaceutical research, it might have been more effective to say so; it would have made the process of digesting it smoother. Given the other responses, I think I am not alone in failing to guess that this was pretty much your sole target.
Well, there may be tactics other than pharmacology: we might have nutritional interventions or perhaps something like transcranial magnetic stimulation, or even something we haven't thought of yet.
But I should emphasize that the sole criterion for such interventions would be that it is feasible to get lots of people to use them.
I'm still not clear what the takeaway from this essay is for a lesswrong reader, though,
This article is not a "here's something you can do to enhance your own life today!" type of article; it is a discussion of existential risk reduction via mass IQ increase. I may well write some "how to" articles too, though.
↑ comment by asciilifeform · 2009-06-16T14:05:06.765Z · LW(p) · GW(p)
But there is no evidence that any pill can raise the average person's IQ by 10 points
Please read this short review of the state of the art of chemical intelligence enhancement.
We probably cannot reliably guarantee 10 added points for every subject yet. Quite far from it, in fact. But there are some promising leads.
if some simple chemical balance adjustment could have such a dramatic effect on fitness
Others have made these points before, but I will summarize: fitness in a prehistoric environment is a very different thing from fitness in the world of today; prehistoric resource constraints (let's pick, for instance, the scarcity of refined sugars) bear no resemblance to those of today; certain refinements may be trivial from the standpoint of modern engineering but inaccessible to biological evolution, or at the very least ended up unreachable from a particular local maximum. Consider, for example, the rarity of evolved wheels.
↑ comment by arundelo · 2009-06-16T07:02:17.080Z · LW(p) · GW(p)
I think this is called need for cognition. (I first saw this phrase somewhere here on LW.)
comment by Arenamontanus · 2009-06-16T18:23:51.957Z · LW(p) · GW(p)
I have tried to research the economic benefits of cognition enhancement, and they are quite possibly substantial. But I think Roko is right about the wider political ramifications.
One relevant reference may be H. Rindermann, "Relevance of Education and Intelligence for the Political Development of Nations: Democracy, Rule of Law and Political Liberty", Intelligence, 36(4), 306-322, Jul-Aug 2008, which argues (using cross-lagged data) that education and cognitive ability have bigger positive effects on democracy, rule of law and political liberty than GDP does. There are of course plenty of reciprocal factors.
As I argued below in my comment on consensus-formation, in many situations a slightly larger group of smart people might matter. The effect might be limited under certain circumstances (e.g. the existence of big enough non-truth seeking biased groups, like the pro-ethanol groups), but intelligence is something that acts across most of life - it will not just affect political behaviour but economic, social and cultural behaviour. That means that it will have multiple chances to affect society.
Would it actually reduce existential risks? I do not know. But given correlations between long-term orientation, cooperation and intelligence, it seems likely that it might help not just to discover risks, but also in ameliorating them. It might be that other noncognitive factors like fearfulness or some innate discounting rate are more powerful. But intelligence can also co-opt noncognitive factors (e.g. a clever advertising campaign exploiting knowledge of cognitive biases to produce a desirable behavior).
comment by gjm · 2009-06-15T21:31:59.799Z · LW(p) · GW(p)
Nick Bostrom proposed that we should work hard looking for ways to enhance the cognition of research scientists, because even a small increase in the average intelligence of research scientists would increase research output by a large amount, because there are lots of scientists.
I wonder about this. Isn't it the case [translation: I'm sure I read in some general-audience psychology book once] that for just about every human activity, scientific research included, there's a certain level above which differences in intelligence, at least in the sense of what intelligence tests measure, seem to have very little correlation with differences in effectiveness?
It wouldn't surprise me at all if scientific research could be benefited much more by, say, making research scientists more energetic, or stronger-willed, or keener on hard work, or able to get by with less sleep.
(Of course this is a bit of a digression, since Roko is suggesting intelligence enhancement for a very different population. I think it might have more value there, if there were actually a feasible way to do it.)
↑ comment by Arenamontanus · 2009-06-16T17:43:03.465Z · LW(p) · GW(p)
There are many traits that would be useful for research and other fields, such as energy, better time management, social ability, etc. Intelligence is important for problem-solving in domains where standard rules have not been defined, which might be particularly true in some research. However, it is hard to measure the impact of such ability directly.
David Lubinski and Camilla Persson Benbow, "Study of Mathematically Precocious Youth After 35 Years", Perspectives on Psychological Science, 1, 316-343 (www.vanderbilt.edu/Peabody/SMPY/DoingPsychScience2006.pdf), has some intriguing data. They followed up the top one percent of scorers and compared the uppermost and lowest quartiles of this already elite group. Unsurprisingly they were on average doing great, and the top group also earned more and had about six times the rate of tenure at top US universities. But that could just be pure competitive ability rather than any individually or socially useful outcome. The interesting result was that the number of doctorates and the percentage earning patents were about twice as high in the top quartile. Doctorates and patents are, after all, a measure of actually having achieved something, and presumably a society is better off if bright people produce more patentable ideas. This IMHO strengthens the idea that we would see gains from cognition enhancement even among the brightest.
However, I think the biggest economic and social impact will be due to intelligence among the great mass of people - reduction of costs and friction due to stupidity, short-term thinking, mistakes and other limitations; increased benefits from better cooperation (smart people do better on iterated prisoner's dilemma games and have longer time horizons); and the ability to manage more complex systems.
A "emotional intelligence enhancer" might be socially beneficial too - there is no reason to think "pure" cognitive function is the end of things we might rationally want to see others enhance.
↑ comment by Roko · 2009-06-15T22:06:05.139Z · LW(p) · GW(p)
If you think increased IQ doesn't lead to better research, just ask a few Nobel prize winners what their IQ is.
A gold star will be awarded to anyone who finds some data on Nobel prize winners and IQ.
↑ comment by gjm · 2009-06-15T22:21:18.683Z · LW(p) · GW(p)
What I said is not "increased IQ doesn't lead to better research" but that perhaps among people already smart enough to be any good at scientific research increased IQ might make little difference.
Even if that's true, though, there might be substantial benefit in increasing IQ among people who would like to do scientific research but aren't quite good enough at it for it to be a sensible career. But that's not what AIUI Nick Bostrom was suggesting.
↑ comment by Roko · 2009-06-15T23:45:24.385Z · LW(p) · GW(p)
among people already smart enough to be any good at scientific research increased IQ might make little difference.
I doubt specifically this statement. I think that research outcomes in many areas are probably superlinear in IQ, i.e. going from 140 to 150 could make the difference between so-so research and groundbreaking research. Consider whether Bostrom would have founded the FHI if his IQ had been 10 points lower.
↑ comment by HughRistik · 2009-06-15T23:51:08.849Z · LW(p) · GW(p)
The relationship of IQ to scientific achievement might be a step function.
I wonder about this. Isn't it the case [translation: I'm sure I read in some general-audience psychology book once] that for just about every human activity, scientific research included, there's a certain level above which differences in intelligence, at least in the sense of what intelligence tests measure, seem to have very little correlation with differences in effectiveness?
I am curious about how this was measured.
↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T00:30:32.869Z · LW(p) · GW(p)
What sort of mechanism would produce a step function? Sounds highly unlikely to me.
Added: I would expect the curve to be smooth.
↑ comment by Roko · 2009-06-16T00:47:51.864Z · LW(p) · GW(p)
The mechanism is likely to be that a smarter researcher sees solutions intuitively, whereas a dumber one has to try lots of things that don't work before getting to the correct solution; this would produce a superlinear speedup, I think, because as you get smarter you avoid more and more wasted effort. There's also the issue of status producing more motivation, which produces more achievement, which produces more status. This adds a significant nonlinearity.
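A toy model of that compounding effect (illustrative assumptions only): suppose solving a hard problem means getting a chain of k insights right, a researcher's chance q of taking the correct branch at each step rises with IQ, and a dead end forces a restart. Then expected attempts scale like (1/q)^k, so productivity scales like q^k, and small gains in q compound across the chain:

```python
def productivity(iq, k=10, q_at_140=0.5, q_gain_per_point=0.01):
    """Toy model: productivity ~ q**k, with q (per-insight success chance)
    assumed to rise linearly with IQ. All parameters here are made up."""
    q = min(1.0, q_at_140 + (iq - 140) * q_gain_per_point)
    return q ** k

base = productivity(140)
for iq in (130, 140, 150, 160):
    print(iq, f"{productivity(iq) / base:.2f}x")
# 130 -> ~0.11x, 150 -> ~6.2x: in this model a 10-point step multiplies
# output, the intuition behind "so-so" versus "groundbreaking" research.
```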
↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T01:24:23.965Z · LW(p) · GW(p)
Miscommunication. My point was only that I expect the function that describes the relationship to be a smooth curve. I wouldn't be too surprised if the relationship between IQ and research productivity is stronger at the high end than in the middle.
↑ comment by HughRistik · 2009-06-16T01:38:38.822Z · LW(p) · GW(p)
Sounds unlikely to me too, but it could explain the phenomenon underlying gjm's quote (that above a certain threshold, intelligence doesn't make much of a difference in "effectiveness"), assuming that the result is valid (and I would want to know how "effectiveness" was measured).
↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T16:10:47.716Z · LW(p) · GW(p)
Your question about how they measured "effectiveness" is right on.
My guess is that the marginal benefits of IQ depend on the task and the IQ range. For tasks of medium difficulty, the marginal benefits of IQ will probably increase as one goes from low IQ to average, flatten out, and then decrease as one gets to a very high range. But higher IQ allows you to efficiently attempt much more difficult (and arguably more important) tasks.
↑ comment by Roko · 2009-06-15T23:55:18.079Z · LW(p) · GW(p)
The relationship of IQ to scientific achievement might be a step function.
My guess is that it is superlinear. Look at, e.g. Von Neumann.
↑ comment by Arenamontanus · 2009-06-16T17:51:28.977Z · LW(p) · GW(p)
Human ability generally seems to be power-law distributed - the "80-20" rule often holds in research. I'm just checking out Murray's "Human Accomplishment", and this is the impression I get from his data - whether it is valid data remains to be seen. But this might have many other causes, from Matthew effects, where widely cited people become even more cited (and maybe get great research environments), to multiplier effects, where productivity is due to the multiplicative effects of more or less random factors - only a few get a lot of them, and the result is a lognormal distribution.
IQ, as ability to make rational inferences in new domains, may be just one of these factors. Low IQ certainly precludes much scientific achievement. There are also selection effects where getting into the right schools or professions require overcoming IQ-loaded hurdles. The real benefits of IQ among geniuses might be smaller than other factors - but having more people with high IQ will certainly not decrease the pool of potential geniuses.
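The multiplicative-factors point is easy to check numerically. A minimal simulation (with arbitrary assumed factor ranges) showing that a product of independent random factors yields a right-skewed, lognormal-ish distribution of productivity:

```python
import random
import statistics

random.seed(0)

def productivity(n_factors=8):
    """Product of independent random factors (ability, environment, luck...),
    each assumed to multiply output by anywhere from 0.5x to 2.0x."""
    p = 1.0
    for _ in range(n_factors):
        p *= random.uniform(0.5, 2.0)
    return p

samples = sorted(productivity() for _ in range(10_000))
top_share = sum(samples[8_000:]) / sum(samples)
print(f"median {statistics.median(samples):.2f}, max {samples[-1]:.1f}")
print(f"top 20% account for {top_share:.0%} of total output")
# The top fifth produces well over half the output: a minority accounts
# for a disproportionate share, roughly the 80-20 pattern mentioned above.
```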
↑ comment by Annoyance · 2009-06-16T18:56:08.167Z · LW(p) · GW(p)
Logical fallacy: those Nobel prize winners do not have increased IQ. Presumably they have high IQ.
If Nobel prize winners all have very high IQs, that tells us that high IQ is a necessary - but not necessarily sufficient - requirement for winning Nobel prizes. And that itself tells us little about what's needed for quality research, even presuming that all Nobels are awarded for quality research. (I happen to know that they aren't, but that's another story.)
What are the Type I and Type II error rates of the Nobel prize award process?
↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T19:10:27.272Z · LW(p) · GW(p)
What are the Type I and Type II error rates of the Nobel prize award process?
IMO, the more important question is whether the overall system of incentives for scientists is effective.
↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T22:57:54.843Z · LW(p) · GW(p)
Imagine we invented a pill which increased everyone's performance on IQ tests by one standard deviation with no side effects (note, I don't expect to see this soon). Further, imagine that all current scientists began taking it. What benefits would you expect to see?
Let me be more specific, assume no funding changes, even though smarter scientists would almost certainly get more funding: how much would Science and Nature have to expand if they did not raise the bar for publication? My estimate: 20% with a 95% confidence interval of [3%, 100%]
↑ comment by Annoyance · 2009-06-17T19:17:02.112Z · LW(p) · GW(p)
What benefits would you expect to see?
None, actually.
I expect the foolish would come up with even cleverer ways of deluding themselves than before, which would make it even harder for them to distinguish truth from their own cherished beliefs.
The wise, who already possessed the ability to override their primitive associational thinking, would have a better ability to grasp complex theories and work through their implications. But they would be vastly outnumbered by the fools - thus, no overall improvement and a possible overall harm.
Not everyone who is socially recognized as a 'scientist' can actually put the principles of science into practice.
↑ comment by Mike Bishop (MichaelBishop) · 2009-06-17T20:24:18.554Z · LW(p) · GW(p)
Let me get this straight: you believe that a majority of scientists would do worse science if they had a higher IQ? I doubt many scientists would agree. Do you agree that you are among a small minority holding this position?
Does this imply that a majority of scientists would do better science if they took a pill which lowered their IQ without side effects?
I know correlation does not imply causation, but do you agree that there is a positive correlation between IQ and the quantity and quality of an individual's scientific publications?
Edited to fix a typo
↑ comment by Annoyance · 2009-06-18T14:00:07.596Z · LW(p) · GW(p)
Does this imply that a majority of scientists would do better science if they took a pill which lowered their IQ without side effects?
No.
I know correlation does not imply causation, but do you agree that there is a positive correlation between IQ and quantity and quality of an individuals scientific publications?
Considered across all individuals? Only a very weak one. I suggest limiting the question to scientists. In that case, the answer for 'quantity' would be "not strongly at all", and 'quality' is so difficult to define as to be useless for this investigation.
↑ comment by Mike Bishop (MichaelBishop) · 2009-06-18T14:48:32.461Z · LW(p) · GW(p)
You claim higher IQ would hurt most scientists, but a lower IQ would not help. This implies a majority of scientists have the ideal IQ for furthering science. To me this sounds like an impossible coincidence.
I might look for research on what predicts a scientist's research productivity. GRE scores may be more commonly available than IQ scores. Can we make terms for a bet? I claim that, net of all controls, GRE or IQ scores will have a nontrivial positive relationship with research productivity.
"Quality" is difficult to measure, but you give up too quickly. e.g. citations, impact factor of journal of publication
↑ comment by Annoyance · 2009-06-18T14:58:04.590Z · LW(p) · GW(p)
I need to clarify. Quite a lot of 'scientists' are terrible at putting the scientific method into practice. I try to exclude those people from the category whenever possible. I do acknowledge, though, that this will frequently lead to confusion.
A lower IQ of scientists overall would make progress slower, but generally wouldn't impede the self-correcting properties of the method.
The so-called scientists who don't or can't put the method into practice would have their ability to make clever but specious arguments impaired. Possibly the reduced nonsense-sensing of the scientists would still be more than enough to identify and exclude the reduced levels of nonsense.
With reduced IQ across scientists and 'scientists' both, it's entirely possible that there would be more scientific progress for the field as a whole. There are a number of necessary but not sufficient factors involved, and non-lethal but cumulatively-damaging factors as well. It's not obvious to me that the properties measured by IQ are equally distributed across the positive and negative factors; I suspect they lend themselves to the negative more than the positive.
↑ comment by JGWeissman · 2009-06-17T21:45:25.515Z · LW(p) · GW(p)
I agree with most of your comment, but "Do you agree that you (are) among a small minority holding this position?" is social pressure in place of a real argument. The truth is not determined by voting.
Replies from: MichaelBishop, Vladimir_Nesov↑ comment by Mike Bishop (MichaelBishop) · 2009-06-17T22:14:30.800Z · LW(p) · GW(p)
The truth is not determined by voting, but the truth is often positively correlated with people's opinions. It is rational to weigh other people's opinions. If I disagree with someone, I must ask myself why I am more likely to be correct than they are.
Replies from: JGWeissman, Vladimir_Nesov, Annoyance↑ comment by JGWeissman · 2009-06-17T22:43:54.448Z · LW(p) · GW(p)
Annoyance had already explained his reasons for his position, and you explained reasons for yours in the rest of your comment. Once we are discussing those reasons directly, there is no need to use majority opinion as a proxy for the relative strength of those reasons.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-06-18T00:46:06.902Z · LW(p) · GW(p)
I disagree. People's opinions are evidence and deserve weight. Smarter, more rational, people's opinions deserve more weight. The opinions of scientists who specialize in a relevant subject deserve still more weight. Why shouldn't we consider this type of evidence?
Replies from: JGWeissman↑ comment by JGWeissman · 2009-06-18T01:23:06.245Z · LW(p) · GW(p)
First, I should point out that what I initially objected to was an appeal to an assertion of raw majority, with no weighting based on rationality, intelligence, or specialization, and in which uninformed opinions are likely to drown out evidence-informed expert opinions.
Second, the reason the opinion of a specialist can be strong evidence is that the specialist is likely to have access to evidence not generally available or known, and to have superior ability to process that evidence. So, when someone discovers that a specialist disagrees with them, they should seek to learn the evidence and arguments that informed the specialist's opinions, and then evaluate them on their merits. Ideally, at this point, the specialist's opinion is no longer evidence, as the fundamental evidence it represents is already accounted for. As a practical matter, the specialist's opinion still counts to the extent that a person is uncertain that they have learned of all the specialist's evidence and understood all the arguments. You have not argued that this uncertainty is and will likely remain significant in this case. Rather, it seemed that you were trying to dismiss an idea because it is unpopular.
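One way to make this "screening off" point precise (my gloss in standard Bayesian notation; the symbols H, E, and O are not in the original comment): once you have conditioned on all the evidence E that determines a specialist's opinion O, the opinion itself carries no further weight.

```latex
% If the specialist's opinion O is a function of evidence E you have
% already accounted for, i.e. O = f(E), then:
P(H \mid E, O) = P(H \mid E)
% In practice you are uncertain whether you have seen all of E, and the
% residual evidential weight of O scales with that uncertainty.
```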
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-06-18T02:27:26.205Z · LW(p) · GW(p)
I think our disagreement is relatively small. A few remaining points:
- People don't have the time or the ability to learn all the relevant evidence and arguments on every issue. Hell, I don't have time to learn all the relevant evidence and arguments on every issue in my own discipline, never mind subjects I know little about.
- Sometimes we mainly care about what the answer is, not why.
- I don't always have time to explain all my reasons, so citing the fact that others agree with me is easier and, depending on the context, may be every bit as useful.
↑ comment by JGWeissman · 2009-06-18T05:32:50.011Z · LW(p) · GW(p)
- Sometimes we mainly care about what the answer is, not why.
- I don't always have time to explain all my reasons, so citing the fact that others agree with me is easier and, depending on the context, may be every bit as useful.
In these cases, where you don't care about, or can't be bothered to explain, the reasons for a position, it seems you lack either the time or the interest to seriously debate the issue.
People don't have the time or the ability to learn all the relevant evidence and arguments on every issue. Hell, I don't have time to learn all the relevant evidence and arguments on every issue in my own discipline, never mind subjects I know little about.
This can be a valid point when you have to make policy decisions about complicated issues, but it does not apply to your appeal to the majority that I objected to.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-06-18T06:41:03.785Z · LW(p) · GW(p)
I didn't just appeal to the majority, I mentioned scientists' opinions explicitly in the sentence previous to the one you are objecting to.
You've argued that appealing to other people's beliefs has few benefits (which I dispute), but unless I'm missing something you haven't named a single cost. I'm sure there are some, but I'll let you name them if you choose.
I'm starting to think that you primarily objected to the tone of my language. You don't really want to stop people from discussing what other people believe.
Replies from: JGWeissman↑ comment by JGWeissman · 2009-06-18T18:06:52.501Z · LW(p) · GW(p)
I didn't just appeal to the majority, I mentioned scientists' opinions explicitly in the sentence previous to the one you are objecting to.
You mentioned scientists' opinions not about the subjects that they study, but about the impact of intelligence on the quality of their work, which they are not likely to know more about than anyone else. If you had mentioned the opinions of psychologists who had studied the effects of intelligence on scientific productivity, that would be the sort of support you are claiming. Further, you weren't even talking about a survey of scientists' opinions, or other evidence about what they think; you just asserted what you think they think. Now, you could make the argument that the scientists would think that for the same reasons you do, or because you believe it is really true and they would notice, but in this case your beliefs about their opinions are not additional evidence.
You've argued that appealing to other people's beliefs has few benefits (which I dispute), but unless I'm missing something you haven't named a single cost. I'm sure there are some, but I'll let you name them if you choose.
Well, I suppose I have not explicitly stated it, but the primary cost is that it displaces discussion of the more fundamental evidence about the issue that is supposedly informing the majority or expert opinion.
And you yourself argued elsewhere that in the political process of voting that attempts to aggregate opinions, "Voters are often uninformed about how policy affects their lives".
Even with expert opinions, it can be hard to understand what the expert thinks. I have seen people go horribly wrong by applying an expert's idea out of context. If you don't understand an expert's reasoning because it is too complicated, you probably don't understand their position well enough to generalize it.
You don't really want to stop people from discussing what other people believe.
What I object to is using a discussion of what other people believe to shut down discussion of an opposing belief.
↑ comment by Vladimir_Nesov · 2009-06-18T08:58:01.925Z · LW(p) · GW(p)
This is a stronger modesty argument, as distinct from simply taking the majority opinion as one of the pieces of evidence for arriving at your own conclusion.
↑ comment by Annoyance · 2009-06-18T14:01:50.182Z · LW(p) · GW(p)
It is rational to weigh other people's opinions.
Logical fallacy: stating a contingent proposition as a universal principle.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-06-18T14:26:12.081Z · LW(p) · GW(p)
Replies from: Annoyance↑ comment by Annoyance · 2009-06-18T14:48:22.573Z · LW(p) · GW(p)
Sometimes the conversation shouldn't be permitted to continue.
Are we looking to facilitate social interaction, or use rational argument to discover truth? The two are often, even usually, incompatible.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-06-18T14:50:55.821Z · LW(p) · GW(p)
An interesting claim; please explain why you believe it to be true.
Replies from: Annoyance↑ comment by Annoyance · 2009-06-18T15:02:41.146Z · LW(p) · GW(p)
The two are compatible only when the preferred social feedback standards match the standards of rational thought. All other social standards necessarily conflict with them. Thus, all else being equal, a randomly chosen standard is quite unlikely to be compatible with rationality.
In actual groups, the standards aren't chosen randomly. But humans being what they are, the standards usually involve primate social dynamics and associational reasoning, neither of which lends itself to the search for truth. Generally they involve social/political 'games' and power struggles.
↑ comment by Vladimir_Nesov · 2009-06-18T08:36:47.629Z · LW(p) · GW(p)
Consensus is valid evidence (but not the only evidence).
Replies from: Annoyance↑ comment by Roko · 2009-06-17T21:19:28.854Z · LW(p) · GW(p)
This argument is vulnerable to the reversal test, for laypeople and scientists alike.
Evolution designed our brains with in-built self-deception mechanisms; it did not design those mechanisms to continue to operate optimally if the intelligence of the person concerned is artificially increased.
It is therefore reasonable to expect that increasing intelligence will, to some extent, disrupt our in-built self-deception.
Actually, now that I review this comment, I would replace this with "it is reasonable to expect that increasing intelligence will, to some extent, affect our in-built self-deception, but the effect may be either negative or positive"; we should look at evidence to see what actually happens.
Replies from: CronoDAS, Annoyance, pjeby↑ comment by CronoDAS · 2009-06-17T21:55:29.041Z · LW(p) · GW(p)
It could also disrupt them in the wrong direction; there's no particular reason to assume that becoming "smarter" won't just make us better self-deceivers.
As Michael Shermer writes, "Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons."
Replies from: Roko↑ comment by Roko · 2009-06-17T22:10:53.836Z · LW(p) · GW(p)
This is plausible in the individual case, but in a large group of people, each with randomly chosen cherished falsehoods, I claim that increasing the average intelligence parameter will increase the degree to which the group as a whole has true beliefs.
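A toy model of Roko's group-level claim (my illustrative sketch, in Python; the population size, question count, and 'cherished falsehood' setup are assumptions, not anything stated in the thread): give each person a few randomly chosen questions on which they always believe a falsehood, let an intelligence parameter set their accuracy everywhere else, and watch group-level accuracy rise with that parameter.

```python
import random

def group_accuracy(n_people=10_000, n_questions=50, skill=0.6,
                   n_cherished=5, seed=0):
    """Fraction of (person, question) beliefs that are true when each
    person is correct with probability `skill`, except on a few
    personally 'cherished' questions where they always believe a
    falsehood. Cherished questions are drawn independently per person."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_people):
        cherished = set(rng.sample(range(n_questions), n_cherished))
        for q in range(n_questions):
            if q in cherished:
                continue  # cherished falsehood: always wrong
            if rng.random() < skill:
                correct += 1
    return correct / (n_people * n_questions)

# Raising the intelligence parameter `skill` raises group accuracy even
# though every person keeps the same number of cherished falsehoods.
for skill in (0.55, 0.65, 0.75):
    print(skill, round(group_accuracy(skill=skill), 3))
```

Annoyance's reply below attacks exactly the randomness assumption: if the cherished questions were shared across people rather than drawn independently, the averaging-out effect in this model would weaken.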
Replies from: Annoyance↑ comment by Annoyance · 2009-06-18T13:57:19.975Z · LW(p) · GW(p)
Cherished falsehoods are unlikely to be random. In groups that aren't selected at random from the entirety of humanity, one person's errors will tend to be correlated with others'.
There are also deep flaws in humanity as a whole, most especially on some issues.
Should we decide to believe in ghosts because most human beings share that belief, or should we rely on rational analysis and the accumulation of evidence (data derived directly from the phenomena in question, not other people's opinions)?
↑ comment by Annoyance · 2009-06-18T13:54:03.731Z · LW(p) · GW(p)
It is therefore reasonable to expect that increasing intelligence will, to some extent, disrupt our in-built self-deception.
No. Your argument is specious. Evolution 'designed' us with all sorts of things 'in mind' that no longer apply. That doesn't mean that changing any arbitrary aspect of our lives will influence any other aspect. If the environmental factors or traits we change have no relationship with the trait we're interested in, we have no initial reason to think that changing them will affect that trait.
Consider the absurdity of taking your argumentative structure seriously:
"Nature designed us to have full heads of hair. Nature also gave us a sense of sight, which it did not design to operate optimally in hairless conditions. It is therefore reasonable to expect that shaving the head will, to some extent, disrupt our visual acuity."
Replies from: Roko↑ comment by Roko · 2009-06-18T14:03:04.062Z · LW(p) · GW(p)
This criticism is valid if we think that the trait we vary is irrelevant to the effect we are considering.
But we have already established that intelligence is likely to affect our ability to self-deceive.
For example, we could fairly easily establish that inhaling large quantities of soot is likely to affect our lungs in some way, then apply this argument to get the conclusion that pollution is probably slightly harmful (with some small degree of certainty).
Essentially this argument says: if you perform a random intervention J that you have reason to believe will affect evolved system S, it will probably reduce the functioning of S, unless J was specifically designed to improve the functioning of S.
Stated like this, I don't find this style of argument unsound; smoking, pollution, obesity, etc. are all cases in point.
Replies from: Annoyance↑ comment by Annoyance · 2009-06-18T14:05:57.744Z · LW(p) · GW(p)
This criticism is valid if we think that the trait we vary is irrelevant to the effect we are considering.
No, the criticism is valid if we have no reason to think that the traits will be causally linked. You're making another logical fallacy - confusing two statements whose logical structure renders them non-equivalent.
(thinking trait is ~relevant) != ~(thinking trait is relevant)
Replies from: Roko↑ comment by pjeby · 2009-06-17T21:34:59.771Z · LW(p) · GW(p)
Evolution designed our brains with in-built self-deception mechanisms; it did not design those mechanisms to continue to operate optimally if the intelligence of the person concerned is artificially increased. It is therefore reasonable to expect that increasing intelligence will, to some extent, disrupt our in-built self-deception.
Not if the original function of (verbal) "intelligence" was to improve our ability to deceive... and I strongly suspect this to be the case. After all, it doesn't take a ton of (verbal) intelligence to hunt and gather.
Replies from: Roko↑ comment by Roko · 2009-06-17T21:48:52.655Z · LW(p) · GW(p)
If we evolved ever more complex ways of lying, then we must also have evolved ever more complex ways of detecting lies. It is highly plausible that increasing intelligence will increase both of these functions.
Replies from: pjeby, Annoyance↑ comment by pjeby · 2009-06-17T22:52:25.493Z · LW(p) · GW(p)
If we evolved ever more complex ways of lying, then we must also have evolved ever more complex ways of detecting lies.
Good point. Of course, that mechanism is for detecting other people's lies, and there is some evidence that it's specific to ideas and/or people you already disagree with or are suspicious of... meaning that increased intelligence doesn't necessarily relate.
One of the central themes in the book I'm working on is that brains are much better at convincing themselves they've thought things through, when in actuality no real thinking has taken place at all.
Looking for problems with something you already believe is a good example of that: nobody does it until they have a good enough reason to actually think it through, as opposed to assuming they already did it, or not even noticing what it is they believe in the first place.
↑ comment by Annoyance · 2009-06-18T13:58:27.562Z · LW(p) · GW(p)
"Lying" and "being wrong" are not the same. Lying is intentionally communicating a non-truth with the intent to deceive.
And intelligence doesn't necessarily have anything to do with our capacity to detect lies. You're simply assuming your conclusion in a different form. Again.
Replies from: asciilifeform, Roko↑ comment by asciilifeform · 2009-06-18T14:32:43.155Z · LW(p) · GW(p)
intelligence doesn't necessarily have anything to do with our capacity to detect lies
Do you actually believe this?
Replies from: Annoyance↑ comment by Annoyance · 2009-06-18T14:46:44.459Z · LW(p) · GW(p)
Yep.
Higher intelligence implies a greater capacity to work out the logical consequences of assertions and thus potentially detect inconsistencies between two assertions or an assertion and an action.
It doesn't imply that people will have the drive to look for such contradictions, or that such a detected contradiction will be interpreted properly, nor does it imply that it will be useful at detecting lies without logical contradictions.
↑ comment by Roko · 2009-06-18T14:19:38.502Z · LW(p) · GW(p)
It seems highly plausible that people who are able to get higher scores on IQ tests are both harder to fool and, on any given question, more likely to believe the correct answer (this second claim is supported by the correlation between IQ and school exam results). If you claim to doubt this, I think you're just being deliberately awkward.
Replies from: Annoyance↑ comment by Annoyance · 2009-06-18T14:22:34.564Z · LW(p) · GW(p)
I suggest you read more Feynman, then. Or James Randi.
School exams, particularly in our country, measure the ability to memorize and retrieve information presented formally. They have no obvious relationship to the ability to evaluate the validity of arguments or derive truth.
Replies from: Roko↑ comment by Roko · 2009-06-18T14:37:44.400Z · LW(p) · GW(p)
They have no obvious relationship to the ability to evaluate the validity of arguments or derive truth.
I suspect that you are going too far in expecting someone who can get 140 on an IQ test to be, on average, just as easy to fool into believing some abstract falsehood as someone who got 60 on that same test. By the way, what's your IQ?
Replies from: Annoyance↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T19:06:33.755Z · LW(p) · GW(p)
Roko was arguing somewhat casually, but I don't think he is actually reasoning casually. It's fine to discourage this type of comment with a downvote, but starting your reply with the words "Logical fallacy" is unnecessarily harsh in my opinion.
Replies from: thomblake↑ comment by thomblake · 2009-06-16T21:06:24.268Z · LW(p) · GW(p)
Roko's comment seems to contain a logical fallacy. While there might be a reason to make the distinction between the reasoning going on in Roko's argument and the reasoning going on in Roko's head, I have no access to the latter and so must evaluate the former. I don't see what's wrong with Annoyance pointing that out, and calling a fallacious argument fallacious is hardly 'harsh'; at least, it's no harsher than is called for.
Replies from: Annoyance, MichaelBishop↑ comment by Annoyance · 2009-06-17T19:18:33.085Z · LW(p) · GW(p)
While there might be a reason to make the distinction between the reasoning going on in Roko's argument and the reasoning going on in Roko's head, I have no access to the latter and so must evaluate the former.
Even if you had access to the latter, that has no bearing on your evaluation of the former. It's the explicit claims that we're looking at, the ones that are actually communicated, not what the person meant inside their head or what we think they might mean.
↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T22:24:18.817Z · LW(p) · GW(p)
I encourage efforts to maintain high standards of reasoning, and fairly explicit reasoning. In evaluating harshness, we need to strike a balance between at least three goals: 1. clarity of thought, 2. creating proper incentives for quality contributions, which requires punishing mistakes and undesirable contributions, and 3. creating a friendly and respectful atmosphere.
For the record, calling Annoyance's comment, "unnecessarily harsh" was meant to be a minor criticism. There are many factors to consider, in this case I would have suggested that Annoyance replace "Logical fallacy" with "Nitpick." Also see my new comment for Annoyance.
comment by CronoDAS · 2009-06-16T00:14:42.159Z · LW(p) · GW(p)
Do you think you might be underestimating the capabilities of the statistically average person of 100 IQ?
Now, if the average voter could understand the concept of photosynthetic efficiency, and could understand a simple numerical calculation showing how inefficient corn is at converting solar energy to stored energy in ethanol, this policy choice would have been dead in the water.
There's an obvious point you're overlooking here.
Plants are, indeed, only about 3% efficient at converting the energy in sunlight into chemical energy, and that's before the living plant is harvested. However, bare ground is zero percent efficient, and the sunlight is there whether we use it or not.
Replies from: PhilGoetz, Roko, Annoyance↑ comment by PhilGoetz · 2009-06-16T18:18:57.636Z · LW(p) · GW(p)
There are many studies - 1 to 2 dozen - showing that producing a gallon of corn ethanol takes more energy than is contained in a gallon of corn ethanol. Usually, the conclusion is that it takes n=1 to 1.4 times as much energy. There are other studies claiming the opposite; they fail to take into account factors such as irrigation and transportation costs.
I wrote to Wired magazine and to somewhere else (I forget where) to correct their outrageously incorrect assertions about corn ethanol. (Wired underreported n by 2 orders of magnitude, which is disturbing because this wasn't like ordinary irresponsible journalism where someone took one "expert's" numbers uncritically. The figure they gave for corn ethanol efficiency was, AFAIK, much, much higher than those of even the most biased ethanol advocates.) My responses were unpublished. It's even worse journalism when you make an extreme error on a point important to public policy, and then someone points it out to you and gives you a dozen literature citations, and you don't correct it.
Replies from: Roko, Roko, Annoyance↑ comment by Roko · 2009-06-16T23:12:56.114Z · LW(p) · GW(p)
Also, for the Less Wrong pedant community: Phil meant that producing 1 litre of corn ethanol takes 1 to 1.4 times as much energy input as is contained in that litre, where "energy input" EXCLUDES the solar energy the corn plants absorb.
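To make the bookkeeping behind that "n = 1 to 1.4" concrete (a minimal sketch; the per-litre figures are my illustrative assumptions, not values from the studies Phil cites): the ratio counts only industrial inputs such as fertiliser, irrigation, harvesting, and distillation, and deliberately excludes sunlight.

```python
# Minimal energy accounting for 1 litre of corn ethanol.
ethanol_out_mj = 21.2              # energy content of 1 L ethanol (LHV)
n = 1.2                            # mid-range input/output ratio from the studies
fossil_in_mj = n * ethanol_out_mj  # fertiliser, irrigation, harvest,
                                   # transport, distillation

print(f"n = {fossil_in_mj / ethanol_out_mj:.1f}: each MJ of ethanol "
      f"consumes {n:.1f} MJ of industrial fuel")
# Counting the absorbed sunlight as an input too would make every fuel a
# net loss by definition, which is the confusion in the sub-thread below.
```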
↑ comment by Annoyance · 2009-06-16T18:48:45.942Z · LW(p) · GW(p)
There are many studies - 1 to 2 dozen - showing that producing a gallon of corn ethanol takes more energy than is contained in a gallon of corn ethanol.
Well, of course. And it takes more energy to produce a gallon of oil, or an equivalent amount of coal or what have you, than is contained in that amount of fuel.
Anyone who knows the laws of thermodynamics will tell you that.
I think you may have a valid point somewhere, but it's not being expressed properly.
Replies from: Alicorn, timtyler, Vladimir_Nesov↑ comment by Alicorn · 2009-06-16T19:06:56.616Z · LW(p) · GW(p)
We have to actually invest the energy that goes into producing ethanol. Coal and oil were produced without such intervention and all we need to do is dig them up. If people were talking about synthesizing these things, that would be a more sensible comparison to make.
Replies from: thomblake↑ comment by thomblake · 2009-06-16T21:01:44.320Z · LW(p) · GW(p)
I believe Annoyance's point was that it takes more energy to create virtually any fuel than you get out of it, and if PhilGoetz meant anything further than that, he should have said so. I also did not feel informed after reading PhilGoetz's comment, for the same reason.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-06-16T22:28:52.732Z · LW(p) · GW(p)
The core distinction here is between energy production and energy storage, and confusing the two is the sort of thing Roko was complaining about (I am not saying anyone here is confusing them).
Ethanol is a valid means of storing energy, although producing it from corn is a terribly suboptimal way of doing so anyway.
Ethanol is not in any useful way a means of producing energy, but it is often presented as if it was, and the inability of the general public to understand this is the heart of the matter.
Fossil fuels, like other non-renewable energy sources, are energy that was stored in advance by natural processes, and are only useful for energy "production" because the energy cost of extracting them is far lower than the energy they store. Corn ethanol is just a very silly way to store solar power.
Replies from: Annoyance↑ comment by Annoyance · 2009-06-17T13:26:35.274Z · LW(p) · GW(p)
Almost completely correct. The only quibble I have is with the claim that "Ethanol is not in any useful way a means of producing energy".
That's not quite true. It would be more accurate to say that trying to use corn as an industrial energy source while simultaneously growing it with methods that require artificial fertilizers (which are extremely energy-intensive to synthesize) and mechanical tillage and harvesting (which requires amounts of industrial-level fuel that are prohibitive for that task) is utterly pointless, because the total process requires more industrial-level fuel than it produces.
You can get more energy out of the corn than you invest into it - obviously. But you can't do so with modern industrial agricultural methods, which expend lots of energy (considered in total, including fertilizer manufacture) to capture a relatively small amount of solar energy in a form that people and animals can consume.
Even when we used draft animals to do the labor required and relied on organic fertilizers only, farmers couldn't make their planted crop areas provide all of the energy and resources necessary to keep the system going. Large areas of forage (usually grass) were needed to feed the beasts so that human-edible crops could be produced - an 'external' energy input for the crop areas. The farms as a whole were powered by the sun only, of course.
When humans do all of the work of agriculture, farming is far, far less efficient (depending on the methods used) and with very primitive methods has a return barely greater than the investment of (human-provided) energy.
↑ comment by timtyler · 2009-06-17T08:59:12.338Z · LW(p) · GW(p)
I don't think you understood Phil's comment. Thermodynamics does not dictate it takes more energy to produce some fuel than is contained in the fuel. Producing fuel is energetically inexpensive - if you have sufficiently concentrated precursors.
Replies from: Annoyance↑ comment by Annoyance · 2009-06-17T13:12:56.708Z · LW(p) · GW(p)
Thermodynamics does not dictate it takes more energy to produce some fuel than is contained in the fuel.
Yes, it does. Second law. You need all of the energy that the fuel will store, plus more to run the process that creates it.
Either you're producing lower-energy fuel from higher-energy fuel, or you're taking base constituents and available energy and synthesizing a higher-energy configuration.
Replies from: Vladimir_Nesov, timtyler↑ comment by Vladimir_Nesov · 2009-06-17T15:22:18.939Z · LW(p) · GW(p)
Yes, it does. Second law. You need all of the energy that the fuel will store, plus more to run the process that creates it.
It's conservation of energy, not second law.
Either you're producing lower-energy fuel from higher-energy fuel, or you're taking base constituents and available energy and synthesizing a higher-energy configuration.
High-energy fuel is simply fuel that lets you recover more energy per unit of weight. Take more low-energy fuel and convert it into less high-energy fuel.
↑ comment by timtyler · 2009-06-17T21:40:31.446Z · LW(p) · GW(p)
I still don't think you understood Phil's comment.
Presumably, you won't be able to make sense of this either:
"In addition, production of ethanol is energy efficient, in that it yields nearly 25 percent more energy than is used in growing the corn, harvesting it, and distilling it into ethanol."
The problem appears to be that you are incorrectly imagining that the other people in the discussion are trying to account for factors such as sunlight.
↑ comment by Vladimir_Nesov · 2009-06-16T19:39:07.174Z · LW(p) · GW(p)
Anyone who knows the laws of thermodynamics will tell you that.
Thermodynamics has nothing to do with this. The sun lies outside this open system.
Replies from: Annoyance↑ comment by Annoyance · 2009-06-17T13:16:30.748Z · LW(p) · GW(p)
Thermodynamics has nothing to do with this.
Thermodynamics has everything to do with the statement in question.
There is the sun outside this open system.
Exactly.
What I suspect Phil was trying to express was that ethanol manufacturing requires us to expend more of the desired level of fuel than we derive from the process - which is a good point - and that this rules out ethanol as a viable energy source - which is NOT a good point.
We invest more energy in building and charging batteries than we can get out of them. That isn't an argument against batteries, because they're a means of transmitting energy in usable form. And extremely useful ones. Carrying around a steam-powered generator to operate a flashlight isn't an option.
Corn ethanol is a terrible net-producer of industrial-grade fuel because its production consumes more of that level of fuel than it produces, NOT because it "takes more energy to make it than it provides", which is trivially, obviously true of any fuel.
↑ comment by Roko · 2009-06-16T00:29:20.237Z · LW(p) · GW(p)
The crux of the argument is that it costs energy to harvest the corn and process it. When you look at the numbers you see that it just doesn't add up.
Furthermore, if you compare corn ethanol to solar power you see why the low conversion efficiency is so damning, especially when you do the math for how much land you'd have to cover with corn to serve US energy needs. Going off the top of my head, this area is greater than the whole of the US.
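Roko's claim checks out on a back-of-envelope basis (a sketch; every input figure below is my own rough assumption, stated explicitly, and conversion losses are ignored, which only makes things worse for corn):

```python
us_primary_power_w = 3.3e12   # ~100 quadrillion BTU/yr of US primary energy
ethanol_mj_per_gal = 80.0     # energy content of a US gallon of ethanol
gal_per_acre_year = 420.0     # ~150 bushels/acre * ~2.8 gal/bushel
seconds_per_year = 3.15e7
acre_m2 = 4047.0
us_area_km2 = 9.8e6

energy_needed_j = us_primary_power_w * seconds_per_year
j_per_acre_year = gal_per_acre_year * ethanol_mj_per_gal * 1e6
acres_needed = energy_needed_j / j_per_acre_year
km2_needed = acres_needed * acre_m2 / 1e6

print(f"{km2_needed / 1e6:.1f} million km^2 of corn needed "
      f"vs {us_area_km2 / 1e6:.1f} million km^2 of US land")
# ~12-13 million km^2 of corn vs ~9.8 million km^2 of country.
```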
Replies from: MichaelBishop, CronoDAS↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T00:58:45.909Z · LW(p) · GW(p)
If people listened to intelligent and careful thinkers they wouldn't need to understand it themselves. Whether this is an easier or harder route is unclear to me.
Replies from: CronoDAS, komponisto↑ comment by CronoDAS · 2009-06-16T05:08:18.750Z · LW(p) · GW(p)
The problem is that, in general, there's no good way for a layman to tell the difference between Carl Sagan and Immanuel Velikovsky, except by comparing them to other people who claim to be experts in a field. A book of internally consistent lies, such as Chariots of the Gods?, will seem as plausible as any book written about real history to someone who doesn't already know that it's a book of lies.
Replies from: MichaelBishop, Annoyance↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T15:52:08.468Z · LW(p) · GW(p)
...there's no good way for a layman to tell the difference between Carl Sagan and Immanuel Velikovsky, except by comparing them to other people who claim to be experts in a field.
That sounds like a promising strategy to me. At least it is far better than what people currently do, which is adopt what their friends think, or ideas they find appealing for other reasons. No doubt it would be better if more people were capable of evaluating scientific theory and evidence themselves, but imagine how much better things would be if people simply asked themselves: "Which is the relevant community of experts? How are opinions on this issue distributed amongst the experts? How reliable have similar experts been in the past?" (E.g., chemists are generally less wrong about chemistry than psychologists are about psychology.) This would be a step in the right direction.
↑ comment by Annoyance · 2009-06-16T18:50:39.504Z · LW(p) · GW(p)
That's not quite true. There are ways of evaluating an expert - but people don't like them, don't implement them, and don't try to find out what they are.
Many, many people who have the social status and authority of experts simply don't know what they're talking about. They can be detected by an earnest and diligent inquiry, combined with a healthy and balanced skepticism.
Doctors are a prime example.
Replies from: CronoDAS↑ comment by CronoDAS · 2009-06-17T21:18:07.118Z · LW(p) · GW(p)
Unfortunately, many of those ways are equivalent to "become an expert yourself". :(
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-06-17T21:26:21.666Z · LW(p) · GW(p)
But how do you know when you've become an expert?
Turtles all the way down!
↑ comment by komponisto · 2009-06-16T03:32:50.250Z · LW(p) · GW(p)
Indeed. Now that I think about it, perhaps the real problem here is that the marginal social status payoff from an increase in IQ is too low (perhaps even negative in some cases); in other words, IQ doesn't buy one enough status. So the question is whether it is easier to fix this than just to raise the IQ baseline.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T15:59:42.654Z · LW(p) · GW(p)
How does increasing "the marginal social status payoff from an increase in IQ" help? I'm not saying it would hurt, but it seems less direct and less important than increasing the marginal social status payoff from having and acting on unbiased beliefs about the world because this is something people can change fairly easily.
Replies from: asciilifeform, komponisto↑ comment by asciilifeform · 2009-06-16T16:01:05.507Z · LW(p) · GW(p)
How does increasing "the marginal social status payoff from an increase in IQ" help?
The implication may be that persons with high IQ are often prevented from putting it to a meaningful use due to the way societies are structured: a statement I agree with.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T16:20:59.775Z · LW(p) · GW(p)
persons with high IQ are often prevented from putting it to a meaningful use due to the way societies are structured.
Do you mean that organizations aren't very good at selecting the best person for each job? I agree with that statement, but it's about much, much more than IQ. It is a tough nut to crack, but I have given some thought to how we could improve honest signaling of people's skills.
Replies from: asciilifeform↑ comment by asciilifeform · 2009-06-16T16:50:37.060Z · LW(p) · GW(p)
Do you mean that organizations aren't very good at selecting the best person for each job?
Actually, no. What I mean is that human society isn't very good at realizing that it would be in its best interest to assign as many high-IQ persons as possible the job of "being themselves" full-time and freely developing their ideas - without having to justify their short-term benefit.
Hell, forget "as many as possible", we don't even have a Bell Labs any more.
Replies from: komponisto, MichaelBishop↑ comment by komponisto · 2009-06-17T00:43:13.508Z · LW(p) · GW(p)
This, I think, is a special case of what I meant. A simple, crude, way to put the general point is that people don't defer enough to those who are smarter. If they did, smart folks would be held in higher esteem by society, and indeed would consequently have greater autonomy.
↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T17:53:07.255Z · LW(p) · GW(p)
How should society implement this? I repeat my claim that other personal characteristics are as important as IQ.
Replies from: asciilifeform↑ comment by asciilifeform · 2009-06-16T18:00:25.709Z · LW(p) · GW(p)
I do not know of a working society-wide solution. Establishing research institutes in the tradition of Bell Labs would be a good start, though.
↑ comment by komponisto · 2009-06-17T00:45:25.435Z · LW(p) · GW(p)
That may well be right. I'm willing to accept that the distinction between "I.Q." and other measures of "smartness" is orthogonal to the point I was making.
↑ comment by CronoDAS · 2009-06-16T05:07:26.783Z · LW(p) · GW(p)
You're absolutely right about corn ethanol not being much of a solution - indeed, you can't power the U.S. on just corn ethanol, but burning corn-derived ethanol does provide a net gain in useful energy. It's just not nearly enough energy to make a difference. Finally, the biggest problem is that there are generally better things to do with grown corn than to turn it into fuel for engines, such as turn it into food for humans or other animals...
↑ comment by Annoyance · 2009-06-16T18:46:50.803Z · LW(p) · GW(p)
Plants are also much better at converting sunlight into chemical energy than any system we can build.
But the issue isn't how well they store energy, but at how efficiently we can use the energy they store. You can't efficiently fuel an electricity-generating plant with corn - trying to use plant energy to power our civilization is hopeless.
Replies from: timtyler↑ comment by timtyler · 2009-06-17T09:05:02.570Z · LW(p) · GW(p)
That is totally incorrect. Plants are 1-2% efficient. Good panels are around 20% - with experimental ones well beyond that. That's because most solar energy occurs at wavelengths unsuitable for photosynthesis.
Replies from: Annoyance↑ comment by Annoyance · 2009-06-17T13:35:42.674Z · LW(p) · GW(p)
Good panels are only that good under laboratory conditions, and require massive expenditures of energy to construct in the first place. Plants are self-replicating.
Equally important, they produce chemical energy directly. Without an efficient way to produce and store hydrogen using electrical power, there's no alternative to chemical fuels.
Replies from: Alicorn, Roko, timtyler↑ comment by Alicorn · 2009-06-17T17:13:31.376Z · LW(p) · GW(p)
"Plants are self-replicating"? In theory, will corn grow without our help? Sure! In practice? Not if you want it in neat, harvestable rows; not if you don't want it to compete with weeds; not if you want it to have a high per-acre yield; not if you want to control which seeds get to turn into plants next generation; not if you don't want crows to eat it; not if you want it to stick to your property and not take over the neighbor's alfalfa; and not if you take all of the plant's kernels and turn them into car fuel.
We don't settle for the replication rate of wild plants, so it's just not the case that they're "free". There's a legitimate question of whether it's costlier (along any given dimension or overall) to produce ethanol than to produce a solar panel which will generate the same amount of power over its useful life, and I don't know the answer, but please let's not extrapolate from the fact that plants sometimes grow unattended to the mistaken conclusion that corn has a negligible input cost.
Replies from: Annoyance↑ comment by Roko · 2009-06-17T14:35:17.237Z · LW(p) · GW(p)
But there are efficient ways to turn electricity into chemical energy, like a li-ion battery.
Best solar panel is at 50.7% efficiency as far as I know.
Plants also require energy to be produced. Solar panels harvest more energy over their lifetime than they take to produce - by a factor of about 10, I seem to recall.
↑ comment by timtyler · 2009-06-17T21:52:30.163Z · LW(p) · GW(p)
After getting the facts so totally wrong, you are supposed to remain in embarrassed silence, not argue the toss with still more dubious claims:
http://en.wikipedia.org/wiki/Electrolysis_of_water#Efficiency
comment by CronoDAS · 2009-06-15T22:34:00.347Z · LW(p) · GW(p)
Interestingly, this may actually be happening. It's fairly clear that people today are taller than they once were...
Replies from: Roko
comment by PhilGoetz · 2009-06-16T18:02:32.456Z · LW(p) · GW(p)
My observations here on LW, thanks to the karma system, lead me to believe there is no threshold effect. People always have great difficulty following the ideas of someone a level above them, regardless of what level they are at. Eliezer's posts are so friggin' long because they are designed to be understood by people a level below him.
I suspect, as I've said repeatedly on LW, that increasing the baseline of intelligence would only lead us to construct a more elaborate society, with more complicated problems, and an even greater chance of catastrophic failure. So the question, if you're interested in decreasing existential risk, is whether we can reduce bias without raising the level of intelligence.
A study reported in Science about a month ago indicated that people in Sweden are more able to be laughed at than people of any other nation, while people in Africa and the Middle East are the least able to stand being laughed at, and the least trustful of smiling people. I'm intrigued by the idea that simple social conventions may be as effective in avoiding bad social results as better reasoning abilities are. It may be better to use your reasoning ability to choose good social conventions, and promote those, than to promote rationality.
Replies from: Roko, loqi↑ comment by Roko · 2009-06-17T12:10:59.115Z · LW(p) · GW(p)
Do you think that current levels of intelligence are precisely optimal for reducing existential risks, where my definition of "existential risk" is the one given by Bostrom? What reason would there be for this remarkable coincidence?
If not, then presumably you think we should start deliberately making people have lower IQ?
comment by hrishimittal · 2009-06-15T21:42:34.427Z · LW(p) · GW(p)
It's interesting speculation, but it assumes that people use all of their current intelligence. There is still the problem of akrasia - a lot of people are perfectly capable of becoming 'smarter' if only they cared to think about things at all. Sure, they could still go mad infallibly, but it would be better than not even trying.
Are you implying that more IQ may help in overcoming akrasia?
Replies from: Roko↑ comment by Roko · 2009-06-15T22:03:37.439Z · LW(p) · GW(p)
All other things being equal, increasing IQ will make people better at telling the difference between rational argument and sophistry, and at understanding marginally more complex arguments.
Decreasing akrasia for the general population is a different issue; the first thought that comes to mind is that increasing people's IQ with fixed motivation ought to improve things.
Replies from: hrishimittal, Vladimir_Nesov↑ comment by hrishimittal · 2009-06-16T00:53:29.903Z · LW(p) · GW(p)
Related post and discussion over at OB - http://www.overcomingbias.com/2009/06/lazy-hurt-less-than-stupid.html
↑ comment by Vladimir_Nesov · 2009-06-15T22:12:36.362Z · LW(p) · GW(p)
Not a sure thing. A more intelligent population may get better at sophistry as well.
comment by taw · 2009-06-15T20:19:02.750Z · LW(p) · GW(p)
if we could improve the intelligence of the average voter by 10 IQ points, imagine how much saner the political process would look
It's highly non-obvious that it would have significant effects. The political process is imperfect but very pragmatic - which makes a lot of sense, as there's only so much good an improved political process can do, while breaking it can cause horrible suffering. So the current approach of gradual tweaks is a very safe alternative, even if it offends people's idealistic sensibilities.
Replies from: MichaelBishop, Arenamontanus, ChrisHibbert, asciilifeform↑ comment by Mike Bishop (MichaelBishop) · 2009-06-16T01:11:22.129Z · LW(p) · GW(p)
IQ is positively correlated with sharing economists' views on economic policy. Therefore it seems likely people would vote differently and that this would translate into policy changes. I would expect other belief changes to be likely if IQ were increased.
↑ comment by Arenamontanus · 2009-06-16T18:01:50.381Z · LW(p) · GW(p)
Here is a simple model. Assume you need a certain intelligence to understand a crucial, policy-affecting idea (we can make this a fuzzy border and talk about distributions later to make it more realistic). If you are below this level, your policy choices will depend on taking up plausible-sounding arguments from others, but they will be uncorrelated with the truth. Left alone, such a population will describe some form of random walk with amplification, ending up at a random decision. If you are above the critical level, your views will be somewhat correlated with the truth. Since you affect others when you engage in political discourse, whether over the breakfast table or on TV, you will have an impact on other people, increasing their chance of agreeing with you. This biases the random walk of public opinion slightly in favour of truth.
In most models of political agreement formation I have seen, even a pretty small minority that is biased in a certain direction can sway a large group that just picks views based on neighbours. This would suggest that increasing the set of people smart enough to get the truth would substantially increase the likelihood of a correct group decision.
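A toy simulation of the dynamic Arenamontanus describes (my sketch; the copy-another-agent rule, the 0.7 accuracy of the informed minority, and all parameters are illustrative assumptions, not taken from any specific agreement-formation model):

```python
import random

def majority_ends_correct(n=1000, informed_frac=0.1, steps=20_000, seed=0):
    """Most agents copy a randomly chosen other agent's opinion; a small
    'informed' minority instead re-derives the true answer with
    probability 0.7. Returns whether the final majority is correct."""
    rng = random.Random(seed)
    beliefs = [rng.random() < 0.5 for _ in range(n)]  # True = correct belief
    informed = set(rng.sample(range(n), int(informed_frac * n)))
    for _ in range(steps):
        i = rng.randrange(n)
        if i in informed:
            beliefs[i] = rng.random() < 0.7         # weakly truth-correlated
        else:
            beliefs[i] = beliefs[rng.randrange(n)]  # social copying
    return sum(beliefs) > n / 2

runs = 20
wins = sum(majority_ends_correct(seed=s) for s in range(runs))
print(f"majority correct in {wins}/{runs} runs")
# With informed_frac = 0 the walk ends at a coin flip; even a 10% informed
# minority typically tips the copying majority toward the truth.
```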
↑ comment by ChrisHibbert · 2009-06-16T06:07:59.682Z · LW(p) · GW(p)
My rebuttal to
imagine how much saner the political process would look
is to point at the work of Tullock and Buchanan on Public Choice theory. Basically, the takeaway is that politicians and bureaucrats respond to incentives. If the voting public were smarter, politicians' behavior would be different during elections, and the politicians would try to make their behavior in office look different. But they would still have an incentive to look like they were addressing problems rather than an incentive to actually solve them. Aligning those outcomes takes much more than 10 IQ points.
And bureaucrats and middle managers in government would still face the same incentives about obfuscating results, multiplying staffing and budgets, and ensuring that projects and bureaucracies have staying power.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-06-17T00:47:13.146Z · LW(p) · GW(p)
Public choice theory is important, but I still think there is good reason to believe increasing average IQ by such a huge amount would help. First, because better-informed voters improve the incentives politicians face. Second, because the relatively bad incentives politicians face are not the only constraint on better government.
↑ comment by asciilifeform · 2009-06-15T21:23:56.067Z · LW(p) · GW(p)
It's highly non-obvious that it would have significant effects
The effects may well be profound if sufficiently increased intelligence will produce changes in an individual's values and goal system, as I suspect it might.
At the risk of "argument from fictional evidence", I would like to bring up Poul Anderson's Brain Wave, an exploration of this idea (among others.)
comment by steven0461 · 2009-06-15T20:14:17.969Z · LW(p) · GW(p)
More intelligence also means more competence at doing potentially world-destroying things, like AI/upload/nano/supervirus research. It does seem to me like the anti-risk effect from intelligence enhancement would somewhat outweigh the pro-risk effect, but I'm not sure.
Replies from: AngryParsley, Arenamontanus, gwern, HughRistik↑ comment by AngryParsley · 2009-06-15T22:48:46.281Z · LW(p) · GW(p)
If more intelligence is bad, is less good? Do you think current levels of intelligence are optimal? If so, that would be an amazing coincidence.
Replies from: Peter_de_Blanc, steven0461, Roko↑ comment by Peter_de_Blanc · 2009-06-16T04:57:38.092Z · LW(p) · GW(p)
I don't think that current levels of intelligence are optimal, but if they were, it wouldn't be a coincidence. Humans are adaptation-executers, and genes make implicit assumptions about their environment. In particular, certain adaptations might be disrupted by changing the average intelligence.
Replies from: AngryParsley, steven0461↑ comment by AngryParsley · 2009-06-16T09:49:38.222Z · LW(p) · GW(p)
If you had the option to increase your intelligence, would you decline because you were worried about certain adaptations being disrupted? The modern world is so different from our EEA that I can't buy your argument.
Disrupting adaptations can be a good thing. Birth control helps prevent overpopulation. Courts help settle disputes without violence. Even rational thought involves recognizing and changing (disrupting?) the typical thought patterns of our adapted brains.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-06-16T12:58:28.045Z · LW(p) · GW(p)
The fact that the modern world changed our values in a way that ancient people wouldn't appreciate on reflection is a bad thing for the ancient people. To us, it'd be bad if we reverted some of these changes, and likewise if we introduced new changes that have negative side effects from the current point of view (on reflection).
It's hard to "increase intelligence" without wrecking some of the values; the brain isn't designed for upgrades. It's the same problem as with trying to change emotions.
Replies from: Roko↑ comment by Roko · 2009-06-16T13:15:25.083Z · LW(p) · GW(p)
It's hard to "increase intelligence" without wrecking some of the values
I don't think that +1 standard deviation in IQ would have this effect. I am not talking about turning people into superintelligent Jupiter brains, you know...
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-06-17T00:52:22.643Z · LW(p) · GW(p)
I definitely think that values, however defined, would change significantly with such a 10-point IQ change (btw, I consider this very large). And I think it would probably be a good thing.
↑ comment by steven0461 · 2009-06-16T05:00:30.739Z · LW(p) · GW(p)
I bet we're already smarter (in the ways relevant to this point) than we were in the ancestral environment, though.
↑ comment by steven0461 · 2009-06-16T04:37:25.423Z · LW(p) · GW(p)
From the upvotes it seems people think this is some sort of devastating counter-point. Yes, if more intelligence is bad, less is good. No, current levels of intelligence are not optimal.
If there's a knockdown argument against the idea that increased average intelligence might cause net increased risk, it should go something like "people will just do the same thing, but slower". I think this works but, again, I'm not sure. Either way, the net decrease in risks from intelligence enhancement is less than would seem to be the case if you considered just the upsides.
Replies from: Annoyance↑ comment by Annoyance · 2009-06-19T15:38:44.711Z · LW(p) · GW(p)
Yes, if more intelligence is bad, less is good.
Fallacious argument.
Replies from: steven0461, Steve_Rayhawk, thomblake, Vladimir_Nesov↑ comment by steven0461 · 2009-06-19T17:05:08.542Z · LW(p) · GW(p)
So:
- AngryParsley is upset that I don't think that if more intelligence is bad, then less intelligence is good
- Annoyance is upset that I do think that if more intelligence is bad, then less intelligence is good
Could you please just get upset at each other?
Replies from: Annoyance↑ comment by Annoyance · 2009-06-20T16:48:53.704Z · LW(p) · GW(p)
No, Annoyance is upset that you presented a logical fallacy as an argument, that no one including you seems to care, and that the statement of your error is somehow seen as being antithetical to Less Wrong's purpose and mission.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-06-23T00:18:11.852Z · LW(p) · GW(p)
There was a post (I'm having trouble finding it) with many examples of how "fallacies" can be reasonable forms of argument. E.g., argument from authority is fallacious, but for many questions it would be irrational to weight a child's opinion as heavily as an adult's.
Could someone provide the link? Annoyance, perhaps you could respond to it, because you seem very quick to point out fallacies and many in the community think it is sometimes unhelpful.
Replies from: Roko, thomblake, Annoyance↑ comment by thomblake · 2009-06-23T16:53:04.942Z · LW(p) · GW(p)
There's an argument from authority that's a fallacy, and one that's not.
Arguments of the form, "S is an authority on X and says p, so we have reason to think that p" can be valid (possibly missing some easy steps) but might be unsound.
Arguments of the form "S is an authority on X and says p, therefore p" are just plainly invalid.
There is no contradiction here.
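A probabilistic reading of the distinction (my gloss; the notation is not thomblake's): the non-fallacious form treats the authority's assertion as evidence that raises the probability of p, while the fallacious form treats it as entailing p.

```latex
% Evidential (non-fallacious) form: if an authority S on X asserts p,
% and S asserts p more readily when p is true than when it is false,
%   P(A_S(p) \mid p) > P(A_S(p) \mid \neg p),
% then by Bayes' theorem the assertion raises our credence:
%   P(p \mid A_S(p)) > P(p).
% Deductive (fallacious) form: A_S(p) \therefore p is invalid, since
% in general P(p \mid A_S(p)) < 1.
```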
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-06-23T22:28:12.349Z · LW(p) · GW(p)
agreed, still looking for the link.
↑ comment by Annoyance · 2009-06-23T16:36:29.778Z · LW(p) · GW(p)
with many examples of how "fallacies" can be reasonable forms of argument.
I strongly suspect 'reasonable' is being used in the most common, and most erroneous, sense - that of "not striking the speaker as being unusual or producing cognitive dissonance".
Fallacies are, by their nature, invalid arguments. There are sometimes valid arguments related loosely to the content of certain fallacies, but they should be asserted rather than the invalid form.
many in the community think it is sometimes unhelpful.
(edit to alter content to what I now think is a better phrasing)
These individuals need to be publicly identified as irrationalists.
Replies from: thomblake, Vladimir_Nesov↑ comment by thomblake · 2009-06-23T16:57:26.129Z · LW(p) · GW(p)
These individuals need to be publicly identified as irrationalists.
Hey, I publicly identify myself as an irrationalist, and I have no problem calling a spade a spade.
That said, folks could easily think "Logical fallacy!" is about as helpful as "That comment had 25 characters!"
If you think people won't notice that there's a fallacy, then you should also think that they won't know what it is, and kindly point it out.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-06-23T22:27:30.546Z · LW(p) · GW(p)
How are you defining irrationalist? We are all, of course, imperfect rationalists.
Replies from: thomblake, Annoyance↑ comment by Annoyance · 2009-06-24T16:41:30.619Z · LW(p) · GW(p)
That said, folks could easily think "Logical fallacy!" is about as helpful as "That comment had 25 characters!"
Well, maybe - but then what are they doing here?
Replies from: thomblake↑ comment by thomblake · 2009-06-24T16:47:16.159Z · LW(p) · GW(p)
I think you're missing my point - we should be in 1 of 2 situations:
- the intended audience already knows there's a logical fallacy, so your statement communicates nothing
- the intended audience does not know there's a logical fallacy, so they also didn't identify what and where the logical fallacy is and you might as well be helpful and point it out.
↑ comment by Annoyance · 2009-06-24T17:34:28.828Z · LW(p) · GW(p)
Even people who know what the fallacy is won't necessarily notice it.
And people who didn't recognize the fallacy can still use logic to determine what it is - or rather, they should be able to.
Replies from: Vladimir_Nesov, Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-06-24T18:04:55.627Z · LW(p) · GW(p)
thomblake's first case refers to people actually noticing the instance of fallacy, not just being abstractly familiar with the kind. Are you twisting words on purpose, or are you actually failing to notice what was intended?
Replies from: thomblake↑ comment by thomblake · 2009-06-24T18:09:11.898Z · LW(p) · GW(p)
Annoyance was pointing out the third case, which I had suggested was unlikely - that one might not notice that the reasoning is fallacious, but can work it out once it's brought to one's attention. Presumably, such people are the intended audience of "Logical Fallacy!" and I could see how that might be helpful to them. I still think it would be much more helpful to point out the specific instance, with little more effort.
Replies from: Annoyance↑ comment by Annoyance · 2009-06-24T19:24:04.542Z · LW(p) · GW(p)
I do see your point. However, if people can't work through a brief, simple written argument and analyze it for its logical content by themselves, they're really not ready to contribute.
Passing over a fallacy without recognizing it is something that a reasonable person might do inadvertently, or even because they want to accept the argument and so will tend not to notice. But someone who is incapable of working through and finding the flaw?
It's not as though I replied to a page-long comment with "There's a word misspelled". There would be hundreds or thousands of words involved, and even a recognizable typo might take a long time to locate. A word that someone genuinely misspelled would probably prove elusive for a long time.
The logical content of such a comment would be much simpler - and few comments here are that complex.
↑ comment by Vladimir_Nesov · 2009-06-24T17:43:10.366Z · LW(p) · GW(p)
"This comment consists of 120 characters" is unhelpful even if nobody bothered to count and the given number is correct.
↑ comment by Vladimir_Nesov · 2009-06-23T17:53:11.880Z · LW(p) · GW(p)
Truth that doesn't pay its rent is poison.
↑ comment by Steve_Rayhawk · 2009-06-20T03:39:21.916Z · LW(p) · GW(p)
Related: The Reversal Test: Eliminating Status Quo Biases in Applied Ethics, by Nick Bostrom and Toby Ord.
Replies from: Annoyance, Vladimir_Nesov

The great majority of those who judge increases to intelligence to be worse than the status quo would also judge decreases to be worse than the status quo. But this puts them in the rather odd position of maintaining that the net value for society provided by our current level of intelligence is at a local optimum, with small changes in either direction producing something worse. We can then ask for an explanation of why this should be thought to be so. If no sufficient reason is provided, our suspicion that the original judgment was influenced by status quo bias is corroborated.
[. . .]
The rationale of the Reversal Test is simple: if a continuous parameter admits of a wide range of possible values, only a tiny subset of which can be local optima, then it is prima facie implausible that the actual value of that parameter should just happen to be at one of these rare local optima [. . .] the burden of proof shifts to those who maintain that some actual parameter is at such a local optimum: they need to provide some good reason for supposing that it is so.
Obviously, the Reversal Test does not show that preferring the status quo is always unjustified. In many cases, it is possible to meet the challenge posed by the Reversal Test [. . .] Let us examine some of the possible ways [. . .]
The Argument from Evolutionary Adaptation [. . .]
The Argument from Transition Costs [. . .]
The Argument from Risk [. . .]
The Argument from Person-Affecting Ethics
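To make the quoted rationale concrete, here is a minimal numerical sketch (my toy example, not from Bostrom and Ord's paper): for a smooth value function over a continuous parameter, only a tiny fraction of parameter values are local optima, so a randomly situated status quo almost certainly isn't at one.

```python
# Toy illustration of the Reversal Test's premise (invented value function):
# discretize a random smooth "social value" curve and count how few grid
# points are local maxima.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth value function: a random sum of low-frequency sinusoids.
xs = np.linspace(0.0, 1.0, 10_000)
freqs = rng.uniform(1, 10, size=8)
phases = rng.uniform(0, 2 * np.pi, size=8)
value = sum(np.sin(2 * np.pi * f * xs + p) for f, p in zip(freqs, phases))

# A grid point is a local maximum if it beats both neighbours.
interior = value[1:-1]
is_max = (interior > value[:-2]) & (interior > value[2:])

print(f"{is_max.sum()} local maxima out of {len(interior)} grid points "
      f"({is_max.mean():.4%})")
```

On a typical run only a handful of the ten thousand grid points qualify, which is exactly the burden-shifting point: absent a specific argument, "we happen to be at a local optimum" is a surprising claim.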
↑ comment by Annoyance · 2009-06-20T16:47:43.080Z · LW(p) · GW(p)
The potential harms and benefits of intelligence depend partly on the nature of the system they exist in. Shift a system away from equilibrium, and harm will tend to predominate in the consequences until the system adapts. Adaptation takes time, and sometimes a great deal of learning.
We don't need to believe that we're at some kind of ultimate optimum to doubt whether a sudden change would be beneficial... as the arguments you mention suggest.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-06-20T16:54:58.741Z · LW(p) · GW(p)
In what you describe, the fact that the system is adapted in the current context is the reason the current context is a local optimum.
Replies from: Annoyance↑ comment by Annoyance · 2009-06-20T17:01:29.920Z · LW(p) · GW(p)
Yes, that's always true.
If one variable changes, the adaptations to the old value will no longer have the old effect - and they're far more likely to be harmful than beneficial.
Imagine that people became more truthful overnight. All of the societal factors that relied on people being deceptive will be thrown out of balance. Benefits predicated upon a certain level of deception will no longer occur, and rules designed to induce people to be truthful in certain situations (like court proceedings) may end up causing more harm than good.
Imagine if people actually took the oath "the truth, the whole truth, and nothing but the truth" seriously all of a sudden. Chaos, paralysis, and malfunctioning would be the immediate effects.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-06-20T17:13:29.549Z · LW(p) · GW(p)
On second thought, no, it's not always true. Both the improvement after equilibrium is reached again and the confusion just after the change depend on the extent of the change (and the change need not be sudden; the whole process can be carried out gradually, by shifting the state of adaptation). While the state just after the change is worse than the state after things calm down, both can be better than the initial state, both can be worse, or they can fall on opposite sides of it.
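The disagreement is easy to see in a toy model (my illustration, with invented dynamics): let society's value depend both on an environment parameter (higher is intrinsically better) and on how well its adaptations match that parameter. A shift causes a transient dip, but where things settle relative to the start depends on the shift.

```python
# Toy adaptation dynamics (invented functional form and constants).
def value(adaptation: float, environment: float) -> float:
    """A higher environment parameter is better, but mismatch is costly."""
    return environment - 4.0 * (adaptation - environment) ** 2

env_before, env_after = 1.0, 1.5  # e.g. average truthfulness rises overnight
adaptation = env_before           # society starts fully adapted

print(f"before the change:  {value(adaptation, env_before):+.2f}")  # +1.00
print(f"just after it:      {value(adaptation, env_after):+.2f}")   # +0.50, the dip

# Adaptation gradually tracks the new environment.
for _ in range(50):
    adaptation += 0.2 * (env_after - adaptation)

print(f"after re-adapting:  {value(adaptation, env_after):+.2f}")   # ~ +1.50
```

Depending on the direction and extent of the shift, the transient dip and the settled state can land above the initial value, below it, or on opposite sides of it, which is the correction Vladimir_Nesov is making.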
Replies from: Annoyance↑ comment by Vladimir_Nesov · 2009-06-20T08:35:15.670Z · LW(p) · GW(p)
Thanks, I created an article on the wiki, citing the paper and using your quote:
↑ comment by thomblake · 2009-06-19T16:06:37.010Z · LW(p) · GW(p)
Well, that's not really an argument - it's more of a conditional statement.
But if you were to take it as an argument, then it's clearly skipping a step, but not a nonobvious one. It seems like you'd make more headway arguing that the argument (with implicit premise included) is unsound.
Clearly it's at least logically possible that an INT of 15 is good, while 14 or 16 are bad.
Replies from: Alicorn↑ comment by Alicorn · 2009-06-19T16:28:15.096Z · LW(p) · GW(p)
I like the D&D reference, but for all practical purposes therein, 14 and 15 are just alike, except with regard to how far an increase or decrease of a given size will get you. So while 15 could be good while 16 is bad, 14 could not be bad while 15 is good (unless 16 was also good, and 15 was better than 14 by virtue of it being easy to get from 15 to 16).
Replies from: thomblake, Annoyance↑ comment by thomblake · 2009-06-19T17:18:11.030Z · LW(p) · GW(p)
Regarding D&D, what Annoyance said.
I was just taking the numbers to be arbitrary, and didn't notice that I was making a D&D reference. ha.
And as long as we're talking D&D 3x, there's some virtue in odd-numbered stats. If someone hits you for 1 point of ability damage, you don't need to recalculate anything.
Replies from: Roko↑ comment by Roko · 2009-06-19T17:49:47.723Z · LW(p) · GW(p)
The pearls of less wrong wisdom astound me! When we've finished playing D&D, can we get back to preventing the end of the world, kids?
Replies from: Annoyance, Roko↑ comment by Roko · 2009-06-20T18:53:44.331Z · LW(p) · GW(p)
I think that this comment is in the running for most downvoted on LW!
Replies from: gwern↑ comment by gwern · 2009-06-20T20:05:17.908Z · LW(p) · GW(p)
No, not even close. To be in the running you'd need to be at <-30 or -40.
Replies from: Roko↑ comment by Roko · 2009-06-20T20:33:29.184Z · LW(p) · GW(p)
Who the hell got downvoted to -40?!
Replies from: orthonormal↑ comment by orthonormal · 2009-06-20T23:23:08.562Z · LW(p) · GW(p)
Well, this self-sacrifice is currently at -30.
Replies from: Roko↑ comment by Vladimir_Nesov · 2009-06-19T16:09:35.000Z · LW(p) · GW(p)
It's impossible for this correction to be relevant.
↑ comment by Arenamontanus · 2009-06-16T18:12:06.168Z · LW(p) · GW(p)
More intelligence means bigger scope for action, and more ability to get desired outcomes. Whether more intelligence increases risk depends on the distribution of accidentally bad outcomes in the new scope (and on how many old bad outcomes can now be avoided), and on whether people will do malign things. On average very few people seem to be malign, so the main issue is likely the new risks themselves.
Looking at the great deliberate human-made disasters of the past suggests that they were often more systemic in nature (societies allowing nasty people or social processes to run their course; e.g. democides and genocides) than due to individuals or groups successfully breaking rules (e.g. terrorism). This is actually a reason to support cognitive enhancement, if it can produce more resilient societies less prone to systemic risks.
Replies from: steven0461↑ comment by steven0461 · 2009-06-16T20:18:58.395Z · LW(p) · GW(p)
One possibility I have in mind is if current rationalist ideas need a certain amount of time to slosh around and pervade the population before technology (fed by intelligence) grows enough for them to really start mattering.
↑ comment by gwern · 2009-06-17T03:39:14.403Z · LW(p) · GW(p)
It does seem to me like the anti-risk effect from intelligence enhancement would somewhat outweigh the pro-risk effect, but I'm not sure.
It is a hard subject to argue about. If I wanted to criticize you, I could say that higher IQs make uploads less economically attractive as they will start off with a smaller advantage, and higher IQs likely would make uploads intrinsically more difficult. And if I wanted to criticize my criticism, I could say that by making people in general smarter, we increase the cost of labor, which makes mechanical substitution ever more attractive (and mechanical substitution for reasonably complicated tasks implies development of AI/uploads).
↑ comment by HughRistik · 2009-06-15T23:48:10.739Z · LW(p) · GW(p)
Yes, increasing intelligence would increase the variance of quality of outcomes for humanity. And the hope is that intelligence would also increase the mean quality of outcome, such that the expected value would be higher.
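A toy Monte Carlo of that point (invented numbers throughout): let enhancement shift the outcome distribution's mean up while also widening it. The expected outcome improves even though the catastrophic left tail gets fatter.

```python
# Illustrative only: "quality of outcome for humanity" as a scalar draw.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

status_quo = rng.normal(loc=0.0, scale=1.0, size=n)  # baseline outcomes
enhanced = rng.normal(loc=0.5, scale=2.0, size=n)    # higher mean, higher variance

CATASTROPHE = -4.0  # hypothetical threshold for an existential-scale outcome

for name, outcomes in [("status quo", status_quo), ("enhanced", enhanced)]:
    print(f"{name:>10}: mean {outcomes.mean():+.3f}, "
          f"P(catastrophe) {np.mean(outcomes < CATASTROPHE):.5f}")
```

Whether the trade is worth it then hinges on how much weight the left tail gets, which is the whole existential-risk question in miniature.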
comment by wuwei · 2009-06-16T02:15:22.032Z · LW(p) · GW(p)
I'm not sure intelligence enhancement alone is sufficient. It'd be better to first do rationality enhancement and then intelligence enhancement. Of course that's also much harder to implement but who said it would be easy?
It sounds like you think intelligence enhancement would result in rationality enhancement. I'm inclined to agree that there is a modest correlation but doubt that it's enough to warrant your conclusion.
Replies from: Roko↑ comment by Roko · 2009-06-16T13:37:05.321Z · LW(p) · GW(p)
It'd be better to first do rationality enhancement and then intelligence enhancement. Of course that's also much harder to implement but who said it would be easy?
All things considered, it seems that giving rationality training is much less likely to work than just telling people that if they take a pill it will make them smarter (and therefore richer).
Replies from: wuwei↑ comment by wuwei · 2009-06-16T23:11:03.783Z · LW(p) · GW(p)
I suspect you aren't sufficiently taking into account the magnitude of people's irrationality and the non-monotonicity of rationality's rewards. I agree that intelligence enhancement would have greater overall effects than rationality enhancement, but rationality's effects will be more careful and targeted - and therefore more likely to work as existential risk mitigation.
Replies from: Roko, MichaelBishop↑ comment by Roko · 2009-06-17T00:03:52.593Z · LW(p) · GW(p)
I agree that a world where everyone had good critical thinking skills would be much safer. But getting there is super-tough. Learning is something most people HATE. Rationality - especially stuff involving probability theory, logic, statistics and some basic evolutionary science - requires IQ 100 as a basic prerequisite in my estimation.
I will discuss the ways we could get to a rational world, but this post is merely about a more intelligent world.
↑ comment by Mike Bishop (MichaelBishop) · 2009-06-17T01:44:27.714Z · LW(p) · GW(p)
...the non-monotonicity of rationality's rewards
Could you elaborate on the shape of the rewards to rationality?
Replies from: gwern, wuwei↑ comment by gwern · 2009-06-17T03:43:06.432Z · LW(p) · GW(p)
This was covered in some LW posts a while ago (which I cannot be arsed to look up and link); the paradigmatic example in those posts, I think, was a LWer who used to be a theist and have a theist girlfriend, but reading OB/LW stuff convinced him of the irrationality of God. Then his girlfriend left his hell-bound hide for greener pastures, and his life is in general poorer than when he started reading OB/LW and striving to be more rational.
The suggestion is that rationality/irrationality is like a U: you can be well-off as a bible-thumper, and well-off as a stone-cold Bayesian atheist, but the middle is unhappy.
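For concreteness, a minimal sketch of such a U (toy functional form, numbers invented):

```python
# Hypothetical U-shaped well-being curve: high at both ends, lowest in the
# middle, so a marginal step away from low rationality can make you worse
# off before it makes you better off.
def wellbeing(rationality: float) -> float:
    return 1.0 - 4.0 * rationality * (1.0 - rationality)  # minimum at 0.5

for r in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"rationality {r:.2f} -> well-being {wellbeing(r):+.2f}")
```

On a curve like this, the local gradient at low rationality points back toward comfortable irrationality, which would explain why partial progress feels like a loss.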
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-17T09:00:41.123Z · LW(p) · GW(p)
and his life is in general poorer
I'm not sure this is a fair statement. He did say he wouldn't go back if he had the choice.
comment by thomblake · 2009-06-15T20:02:15.564Z · LW(p) · GW(p)
if we could improve the intelligence of the average voter by 10 IQ points
Please do be careful with statements like this. If by 'average voter' you mean 'voter of average IQ', then what you propose is mathematically impossible.
Replies from: Roko, lavalamp, orthonormal, QuestionTime, thomblake↑ comment by Roko · 2009-06-15T20:20:42.139Z · LW(p) · GW(p)
then what you propose is mathematically impossible.
um, well ok, if you want to be a pedant about it, replace it with:
"if we could improve the intelligence of everyone in our society such that (scored on the current IQ scale) the average score was 110"
↑ comment by lavalamp · 2009-06-15T20:23:42.646Z · LW(p) · GW(p)
Why is that? Of course we all know that the average IQ can't be 10 points higher than the average IQ, but that would be a silly way to interpret what he said, and I can't think of any mathematical reason why the new average couldn't become, say, 110 instead of 100... (I can think of lots of practical reasons, though) (EDIT: and of course I mean on today's scale, as others have pointed out)
↑ comment by orthonormal · 2009-06-15T20:18:30.160Z · LW(p) · GW(p)
The charitable interpretation makes perfect sense: improve the intelligence of the electorate until their average score on the IQ test normalized for today would be 10 points greater than the norm.
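A small sketch of the renorming point (toy raw-score numbers): IQ is a z-score against a fixed norming population, rescaled to mean 100 and SD 15, so "average IQ of 110" is coherent only relative to today's norms; any re-normed test snaps the average back to 100.

```python
# Toy renorming example with invented raw test scores.
import numpy as np

rng = np.random.default_rng(2)

raw_before = rng.normal(50.0, 10.0, size=100_000)  # hypothetical raw ability
raw_after = raw_before + 10.0 * (2.0 / 3.0)        # everyone gains ~2/3 of an old SD

def iq(raw, norm_mean, norm_sd):
    """Score raw ability against a given norming population."""
    return 100.0 + 15.0 * (raw - norm_mean) / norm_sd

# Scored against the old norms, the enhanced average is ~110...
print(f"old-scale IQ: {iq(raw_after, raw_before.mean(), raw_before.std()).mean():.1f}")
# ...but against re-computed norms it is 100 again, by construction.
print(f"re-normed IQ: {iq(raw_after, raw_after.mean(), raw_after.std()).mean():.1f}")
```

So thomblake's impossibility claim and the charitable reading are both right: the average *re-normed* IQ can't move, while the average score on today's scale can.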
↑ comment by QuestionTime · 2009-06-17T00:33:16.122Z · LW(p) · GW(p)
Not helpful, but not worth negative ten points either. Negative five at worst. Upvoted.
↑ comment by thomblake · 2009-06-17T02:06:11.112Z · LW(p) · GW(p)
Was the massive downvote here because I didn't include enough information about how this is mathematically impossible, or because people don't agree that one should speak with precision, or because people don't like me pointing out that one should speak with precision?
Replies from: loqi, Alicorn↑ comment by Alicorn · 2009-06-17T03:47:24.415Z · LW(p) · GW(p)
I didn't downvote you, but my guess is that you were perceived to be impolite about your failure (deliberate or inadvertent) to get the point of the original remark. Neither missing the point nor being impolite tends to be looked on favorably.