Comments

Comment by nykos on Offer: I'll match donations to the Against Malaria Foundation · 2013-02-11T22:15:13.080Z · LW · GW

Why not donate to people promoting neocolonialism, if you are really concerned about efficient malaria eradication and the well-being of Black people? I for one refuse to donate any amount of money to treat symptoms rather than causes, at least in the case of strangers; it is an inefficient allocation of resources.

Comment by nykos on The value of Now. · 2013-02-07T19:49:25.487Z · LW · GW

If I were a scientist, I would ask for evidence of the existence of Omega-level beings before considering the question further. We can of course debate how many Omega-level beings fit on the head of a pin, but I believe our limited time in this Universe is better spent asking different kinds of questions.

Comment by nykos on [SEQ RERUN] Eutopia is Scary · 2013-02-02T23:20:00.333Z · LW · GW

Maybe the forces of human nature make the future in some sense inevitable, conspiring to keep the long-term probability of eutopia very low?

If you took a freezing, dirty European peasant in winter ca. 1000 AD, and transported him to 0 AD Rome and its public thermae, he would also be heading towards eutopia - only in the 'wrong' direction of time. The worship of many gods in particular would probably strike him as horrifying.

If you transported Thomas Carlyle through time to the present, he would be horrified and disgusted, probably also frightened. But he would most definitely not be surprised. He would say: "I told you so". I'm sure there were at least a few Romans who, when transported to Dark Ages Europe, would have said the same.

Comment by nykos on Politics Discussion Thread January 2013 · 2013-01-05T18:41:06.985Z · LW · GW

I do think Progressive-like memes would have developed in a non-Christian-descended implementation of what is often called The Cathedral

I think this is quite likely to be the case, since Progressivism (which one might think of as "altruism gone rampant") might actually emerge in time from the mating patterns and the resulting genetic structure of a population.

Comment by nykos on [LINK] Forty Years of String Theory: Reflecting on the Foundations · 2012-12-24T13:26:38.582Z · LW · GW

What are the experimental predictions of the various string theories?

Have any of those been experimentally verified so far?

Is belief in string theory paying any rent?

Comment by nykos on [LINK] Mastering Linear Algebra in 10 Days: Astounding Experiments in Ultra-Learning · 2012-10-31T17:55:12.200Z · LW · GW

What about individual IQ? It's not at all clear that learning methods yield uniform results across the bell curve. What might work for a 130+ IQ individual may not work for a 110 IQ individual - and vice-versa.

Comment by nykos on Parenting and Happiness · 2012-10-04T11:50:20.107Z · LW · GW

Intelligent people are more likely to think about the consequences when deciding whether to have a child. But there is a prisoner's-dilemma type of situation here:

One reason smart people forego reproduction is that they might feel children make them more unhappy overall for at least the first few years (a not unreasonable assumption), or simply that they are not religious (smart religious people do still have lots of children). As a consequence, in 20 years, the average IQ of that society will fall (bar some policy reversals encouraging eugenic breeding, or advances in genetic engineering), as only the less intelligent breed. Since, all other things equal, smarter people perform better on their jobs, the average quality of services provided in that society (both public and private) goes down. So in the end everyone becomes more unhappy (even though the unhappiness of a childless smart person resulting from societal dysgenics may not outweigh the temporary unhappiness from having a child).

Comment by nykos on Cynical explanations of FAI critics (including myself) · 2012-08-22T16:08:51.991Z · LW · GW

I do not understand how this has anything to do with FAI

It is relevant because FAI is currently a branch of pure philosophy. Without constant experimental feedback and contact with reality, philosophy simply cannot deliver useful results the way science can.

This is not in fact "simple" to do. It's not even clear what level of detail will be needed: just a neural network? Hormones? Glial cells? Modelling of the actual neurons?

Are there any other current proposals to build AGI that don't start from the brain? From what I can tell, people don't even know where to begin with those.

Are you sure you understand what FAI actually refers to? In particular, with p ≈ 1, no living human qualifies as Friendly; even if they did, we would still need to solve several open problems also needed for FAI (like ensuring that value systems remain unchanged during self-modification) for a Friendly Upload to remain Friendly.

At some point you have to settle for "good enough" and "friendly enough". Keep in mind that simply stalling AI until you have your perfect FAI philosophy in place may have a serious cost in terms of human lives lost due to inaction.

(like ensuring that value systems remain unchanged during self-modification)

But what if the AI is programmed with a faulty value system by its human creators?

Such a program would be incredibly difficult to get implemented, and so is probably not worth it.

Fair enough, I was giving it as an example because it is possible to implement now - at least technically, though obviously not politically. Things like genome repair seem more distant in time. Cloning brilliant scientists seems like a better course of action in the long run, and without so many controversies. However, this would still leave the problem of what to do with those who are genetically more prone to violence, who are a net drag on society.

Comment by nykos on Cynical explanations of FAI critics (including myself) · 2012-08-22T15:55:58.743Z · LW · GW

Is a "vegetative-state life-support cripple" a person at all?

Comment by nykos on Cynical explanations of FAI critics (including myself) · 2012-08-14T19:26:46.858Z · LW · GW

I upvoted you for responding with a refutation and not simply downvoting.

Comment by nykos on Cynical explanations of FAI critics (including myself) · 2012-08-14T19:16:43.779Z · LW · GW

OK, I got two minuses already, can't say I'm surprised because what I wrote is not politically correct, and probably some of you thought that I broke the "politics is the mind-killer" informal rule (which is not really rational if you happen to believe that the default political position - the one most likely to pass under the radar as non-mindkilling - is not static, but in fact is constantly shifting, usually in a leftwards direction).

For the sake of all rationalists, I hope I was downvoted because of the latter. Otherwise, all hope for rational argument is lost, if even people in the rationalist community adopt thought processes more similar to those of politicians (i.e., demotism) than true scientists.

The unfortunate fact is that you cannot separate the speed of scientific progress from public policy or the particular structure of the society engaged in science. Science is not some abstract ideal, it is the triumph of the human mind, of the still-rare people possessing both intelligence and rationality (the latter may even be restricted only to their area of expertise, see Abdus Salam or Georges Lemaître). Humans are inherently political animals. The quality of science depends directly, first and foremost, on the number and quality of minds performing it, and some political positions happen to be ways to increase that number more than others. Simply ignoring the connection is not an option if you really believe in the promise of science to help improve the lives of every human being no matter his IQ or mental profile (like I do).

If you downvote me, I have one request: I would at least like to read why.

Comment by nykos on Cynical explanations of FAI critics (including myself) · 2012-08-14T18:35:56.147Z · LW · GW

I concede that, under some really extreme environmental conditions, any genetic advantages would be canceled out. So, you might actually be right if the IQ 80 mother is really bad. Money should be provided to poor families by the state, but only as long as they raise their child well - as determined by periodic medical checks. Any person, no matter the IQ, can do one thing reasonably well, and that is to raise children to maturity.

But I believe you are taking the importance of parenthood way too far, and disregarding the hereditarian point of view too easily. The blank-slate bias is something to be avoided. I would suggest you read this article by Matt Ridley.

Excerpt:

Today, a third of a century after the study began and with other studies of reunited twins having reached the same conclusion, the numbers are striking. Monozygotic twins raised apart are more similar in IQ (74%) than dizygotic (fraternal) twins raised together (60%) and much more than parent-children pairs (42%); half-siblings (31%); adoptive siblings (29%-34%); virtual twins, or similarly aged but unrelated children raised together (28%); adoptive parent-child pairs (19%) and cousins (15%). Nothing but genes can explain this hierarchy.

Comment by nykos on Cynical explanations of FAI critics (including myself) · 2012-08-14T18:16:34.760Z · LW · GW

The society will be listening to its Einsteins and Feynmans once they band together and figure out how to use the dark arts to take control of the mass-media and universities away from their present owners and use them for their own, more enlightened goals. Or at least ingratiate themselves before the current rulers. They could promise to build new bombs or drones, for example. As for not being interested in solving FAI and these kinds of problems, that's really not a very convincing argument IMO. Throughout history, in societies of high average IQ and a culture tolerant of science, there was never a shortage of people curious about the world. Why wouldn't people with stratospheric IQ be curious about the world and enjoy the challenge of science, especially if they live in a brain-dead society which routinely engages in easy and boring trivialities? I mean, what would you choose between working on FAI or watching the Kardashians? I know what I would, even though my IQ is not very much above average and I'm really bad at probability problems.

There will never be a shortage of nerds and Asperger types out there, at least not for a long time, even with the current dysgenic trends.

Comment by nykos on Cynical explanations of FAI critics (including myself) · 2012-08-14T09:00:30.651Z · LW · GW

The problem with FAI is that it is nearly impossible for human minds of even high intellect to get good results solely through philosophy, without experimental feedback. Aristotle famously got it wrong when he deduced philosophically that heavier objects fall faster than lighter ones.

Also, I believe that it is a pointless endeavor for now. Here are two reasons why I think that's the case.

*1. We humans don't have any idea whatsoever as to what constitutes the essence of an intelligent system. Because of our limited intellects, our best bet is to simply take the one intelligent system that we know of - the human brain - and simply replicate it in an artificial manner. This is a far easier task than designing an intelligence from scratch, since in this case the part of design was already done by natural (and sexual) selection.

Our best hope and easiest path for AI is simply to replicate the human brain (preferably the brain of an intelligent and docile human being), and make a body suitable for it to inhabit. Henry Markram is working on this (hopefully he will use himself or someone like himself for the first template - instead of some stupid or deranged human), and he notably hasn't been terribly concerned with Friendly AI. Ask yourself this: what makes for FH (Friendly Humans)? And here we turn to neuroscience, evo-psych and... the thing that some people want to avoid discussing for fear of making others uncomfortable: HBD. People of higher average IQ are, on average, less predisposed to violence. Inbred populations are more predisposed to clannish behavior (we would ideally want an AI that is the opposite of that, that is most willing to be tolerant of out-groups). Some populations of human beings are more predisposed to violence, while some have a reputation for docility (you can see that in the crime rates). It's in the genes and the brain that they produce, combined with some environmental factors like random mutations, the way proteins fold and are expressed, etc.

So obviously the most promising way to create Friendly AI at this point in time is to replicate the brain of a Friendly Human.

*2. We might not be smart or creative enough on average to be able to build a FAI, or it might take too long a time to do so. This is a problem that, if exists, will not only not go away, but actually compound itself. As long as there are no restrictions whatsoever on reproduction and some form of welfarism and socialism exists in most nations on Earth, there will be dysgenics with regards to intelligence - since intelligent people generally have less children than those on the left half of the Bell curve - while the latter are basically subsidized to reproduce by means of wealth transfer from the rich (who are also more likely to have above-average IQs, else they wouldn't be rich).

Even if we do possess the knowledge to replicate the human brain, I believe it is highly unlikely that it will happen in a single generation. AI (friendly or not) is NOT just around the corner. Humanity doesn't even possess the ability to write a bugless operating system, or build a computer that obeys sane laws of personal computing. What's worse, it did possess the ability to build something reasonably close to these ideals, but that ability is lost today. If building FAI takes more than one generation, and the survival of billions of people depends on it, then we should rather have it sooner rather than later.

The current bottleneck with AI and most science in general is the number of human minds able and willing to do it. Without the ability to mass-produce at least human-level AI, we simply desperately need to maximize the proportion of intelligent and conscientious human beings, by producing as many of them as possible. The sad truth is this: one Einstein or Feynman is more valuable when it comes to the continued well-being of humanity than 99% of the rest of human beings, who are simply incapable of producing such high-level work and thought because of either genetic or environmental factors, i.e. conditions in the uterus, enough iodine, etc. The higher the average intelligence of humanity, the more science thrives.

Eugenics for intelligence is the obvious answer. This can be achieved through various means, discussed in this very good post on West Hunter. Just one example, which is one of the slowest but the one advanced nations are 100% capable of doing right now: advanced nations already possess the means to create embryos using the sperm and eggs of the best and brightest of scientists alive today. If our leaders simply conditioned welfare and even payments of large sums of money for the below-average IQ women on them acting as surrogate mothers for "genius" embryos, in 20-30 years we could have dozens of Feynmans and tens of thousands of Yudkowskys working on AI. This would have the added benefit on keeping the low-IQ mothers otherwise pregnant and unavailable for spreading low-IQ genes to the next generation, which would result in less people who are a net drain on the future society and would cause only time-consuming problems for the genius kids (like stealing their possessions or engaging in other criminal activities).

I do realize that increasing intelligence in this manner is bound to have an upper limit and, furthermore, will have some other drawbacks. The high incidence of Tay-Sachs disease among the 110 average IQ Ashkenazi Jews is an illustration of this. But I believe that the discoveries of the healthy high IQ people have the potential to provide more hedons than the dolors of the Tay-Sachs sufferers (or other afflictions of high-IQ people, including some less serious ones like myopia).

EDIT: Given the above, especially if *2. is indeed the case, it is not unreasonable to believe that donating to AmRen or Steve Sailer has greater utility than donating to SIAI. I believe that the brainpower at SIAI is better spent on a problem that is almost as difficult as FAI, namely making HBD acceptable discourse in the scientific and political circles (preferably without telling people who wouldn't fully grasp it and would instead use it as justification for hatred towards Blacks), and specifically peaceful, non-violent eugenics for intelligence as a policy for the improvement of human societies over time.

Comment by nykos on A Marriage Ceremony for Aspiring Rationalists · 2012-07-30T17:24:53.153Z · LW · GW

Good luck explaining Bayes' law to people with IQs below 90.

Comment by nykos on A Marriage Ceremony for Aspiring Rationalists · 2012-07-30T17:15:45.546Z · LW · GW

Rationalism may not be heritable, but intelligence surely is.

Let's face it, LessWrong and rationalism in general appeal mostly to people with at least 1 SD above average IQ.

Comment by nykos on [deleted post] 2012-06-03T12:37:52.295Z

Given that the burden of proof regarding the equality of intelligence of human populations that have evolved in reproductive isolation from each other for thousands, if not tens of thousands, of years, and in radically different environments (of varying survival difficulty), lies with the egalitarians claiming that all human populations have the same intelligence distribution - I'd say that this article doesn't even belong on LessWrong.

What we need instead is either: a) An article explaining natural selection to those who don't understand where people who don't believe in human neurological uniformity are coming from; b) An article that proves that ALL biomes on planet Earth have had the exact same selection pressures for intelligence in modern H.sapiens throughout the past 100,000 years. Furthermore, unless you have a belief that Homo sapiens, Homo neanderthalensis and Denisovans had the exact same intelligence distribution, this article must prove that the 2-3% Neanderthal admixture in all non-Africans and the 5% Denisovan admixture in some Oceanians is not related to brain function and intelligence.

Sadly, we live in a world where human neurological uniformity is the null hypothesis even for people who should know better, given knowledge of evolution by natural selection.

Comment by nykos on Rationality Quotes May 2012 · 2012-05-03T12:21:41.571Z · LW · GW

More quotes by Mencius Moldbug:

When they say things like "in cognitive science, Bayesian reasoner is the technically precise codeword that we use to mean rational mind," they really do mean it. Move over, Aristotle!

Of course, in Catholicism, Catholic is the technically precise codeword that they use to mean rational mind. I am not a Catholic or even a Christian, but frankly, I think that if I had to vote for a dictator of the world and the only information I had was whether the candidate was an orthodox Bayesian or an orthodox Catholic, I'd go with the latter.

The only problem is that this little formula is not a complete, drop-in replacement for your brain. If a reservationist is skeptical of anything on God's green earth, it's people who want to replace his (or her) brain with a formula.

To make this more concrete, let's look at how fragile Bayesian inference is in the presence of an attacker who's filtering our event stream. By throwing off P(B), any undetected pattern of correlation can completely foul the whole system. If the attacker, whenever he pulls a red ball out of the urn, puts it back and keeps pulling until he gets a blue ball, the Bayesian "rational mind" will conclude that the urn is entirely full of blue balls. And Bayesian inference certainly does not offer any suggestion that you should look at who's pulling balls out of the urn and see what he has up his sleeves. Once again, the problem is not that Bayesianism is untrue. The problem is that the human brain has a very limited capacity for analytic reasoning to begin with.

They are all from the article A Reservationist Epistemology
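The filtered-urn failure mode in the quote is easy to simulate. Below is a minimal sketch (my own illustration, not from the quoted article): a naive Beta-Bernoulli updater trusts a stream in which an attacker silently discards every red draw. The updater's model assumes i.i.d. reports, and since that assumption is violated, its posterior confidently converges on "the urn is all blue" even though the true urn is half red.

```python
import random

random.seed(0)

TRUE_P_BLUE = 0.5  # the real urn is half red, half blue


def attacker_draw():
    """Adversarial filter: redraw until a blue ball comes up, report only that."""
    while True:
        if random.random() < TRUE_P_BLUE:
            return "blue"
        # red balls are silently put back, never reported


# Naive Bayesian observer: Beta(1, 1) prior over the urn's blue fraction,
# updated as if the reported draws were an unbiased i.i.d. sample.
alpha, beta = 1, 1
for _ in range(1000):
    if attacker_draw() == "blue":
        alpha += 1
    else:
        beta += 1

posterior_mean = alpha / (alpha + beta)
print(f"posterior mean P(blue) = {posterior_mean:.3f}")  # ~0.999, far from the true 0.5
```

The arithmetic is not the problem; conditioning on "a blue ball was reported" instead of "a blue ball was reported by a possibly adversarial process" is. A model that includes the reporting process in its hypothesis space would not fail this way.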

Comment by nykos on Rationality Quotes May 2012 · 2012-05-03T11:48:47.349Z · LW · GW

Even though his prescription may be lacking (here is some criticism of neocameralism: http://unruled.blogspot.com/2008/06/about-fnargocracy.html ), his description and diagnosis of everything wrong with the world is largely correct. Any possible political solution must begin from Moldbug's diagnosis of all the bad things that come with having Universalism as the most dominant ideology/religion the world has ever experienced.

One example of a bad consequence of Universalism is the delay of the Singularity. If you, for example, want to find out why Jews are more intelligent on average than Blacks, the system will NOT support your work and will even ostracize you for being racist, even though that knowledge might one day prove invaluable to understanding intelligence and building an intelligent machine (and also helping the people who are less fortunate at the genetic lottery). The followers of a religion that holds the Equality of Man as primary tenet will be suppressing any scientific inquiry into what makes us different from one another. Universalism is the reason why common-sense proposals like those of Greg Cochran ( http://westhunt.wordpress.com/2012/03/09/get-smart/ ) will never be official policy. While we don't have the knowledge to create machines of higher intelligence than us, we do know how to create a smarter next generation of human beings. Scientific progress, economic growth and civilization in general are proportional to the number of intelligent people and inversely proportional to the number of not-so-smart people. We need more smart people (at least until we can build smarter machines), so that we all may benefit from the products of their minds.

Comment by nykos on So You Want to Save the World · 2012-01-09T07:13:59.043Z · LW · GW

To achieve the Singularity in as fast a time as possible, we need not only money, but lots of smart, hard-working people (who will turn out to be mostly White and Asian males). The thing is, those traits are to a large part genetic; and we know that Ashkenazi Jews are smarter on average than other human groups. I am writing this at the risk of further inflating Eliezer's already massive ego :)

So, an obvious interim solution until we get to the point of enhancing our intelligence through artificial, non-genetic means (or inventing a Seed AI) is to popularize eugenics for intelligence and practice it. This should help improve our main deficiency, which is not the lack of money in my opinion, but the lack of brains. It is human intelligence augmentation, except that it can only work with NEW humans instead of existing ones (which is the Holy Grail we are aspiring to)

Of course, there is a catch: such an eugenics program would have to be kick-started by the current, rather dumb politicians and citizens - and the chances of them screwing things up are quite high, especially given the dubious associations with totally irrational beliefs like antisemitism that are bound to arise.

Unlike most of you, I'm skeptical about the Singularity being achieved in my lifetime. There have been no serious paradigm-shifts in the understanding of science lately, and the AI research seems to be progressing at a very slow pace. Meanwhile, Eliezer hasn't even started coding because he wants to explore the ramifications of Friendly AI. Fair enough, but I don't think he is smart enough to get it right philosophically, without an actual experiment for feedback. Aristotle famously got it wrong, by deducing that F = m*v using thought experiments and not bothering to check his philosophy against the real world.

So I think the best thing we can do right now is to convince people that intelligence and conscientiousness are real traits that are extremely desirable to any society (as opposed to how much money a person makes, as the libertarians would argue), that they can in principle be measured, and which are at least in part determined genetically, while making it clear that our end goal is to uplift everyone who wants it, to increase their intelligence to superhuman levels (a la "Rise of the Planet of the Apes") and to equalize the human races and populations in this regard.

Comment by nykos on The $125,000 Summer Singularity Challenge · 2011-08-06T01:59:36.868Z · LW · GW

If you really want to save lives, you better donate to people who do more than write papers. Aubrey de Grey's institute might be a better start.

The bottom line is, the SingInst is just a huge money drain. It really doesn't do anything useful, and all it ever produced is a bunch of papers. It actually does something worse, namely subsidizing a slacker-genius like Yudkowsky, who really should find better uses for his mind than armchair philosophy about "friendly AI" when we don't even have the knowledge to build an AI with the intelligence of a 10-year-old. Mr. Yudkowsky can actually build not one, but several intelligences greater than probably 95% of humans on the planet - all of them almost guaranteed to be friendly. He simply has to shave that ugly beard, stop being so nerdy and actually meet smart women just like him. His IQ is probably above the Ashkenazi already-high average, so having, say, 10 children and directing each on careers in every field that can potentially eliminate human aging and death will probably do more for humanity than endless philosophizing ever will.

Ditto for the rest of you who know you are smarter than the rest of humanity, but still allow mentally-challenged people to outbreed you, which results in decreasing the proportion of people on this planet whose brains can actually understand science and rationality.

Comment by nykos on The Best Textbooks on Every Subject · 2011-01-17T17:42:12.433Z · LW · GW

I think that it pays to be rationally ignorant. It is an economic fact that the more people specialize, the more they get paid and the greater their chance of making a significant contribution in their particular field. You can't achieve your best as a doctor if you spend valuable time reading textbooks about Western philosophy or quantum computing instead of reading textbooks about diseases. There is a saying capturing this thought: "jack of all trades, master of none". Sure, there are some fields such as AI at the intersection of many sciences - however, I doubt that most people on this blog (including me) are capable of handling that much information while producing new results in the field in a reasonable amount of time.

So, instead of reading the intro textbook of each field/science (I bet there are more such fields than anyone can handle in a normal, no-singularity lifespan), the best approach for me is to learn a little about each field in my free time - just enough so that I will not be ignorant to the point of making serious mistakes about the nature of reality, and sufficiently easy on the mind so that I maintain the processing power for the main work: digging as deep as possible into the field of my choice.

So, I disagree with the author and think that Teaching Company courses are more useful than textbooks... except for the textbooks pertaining to your chosen specialty.

There is a real danger in becoming more absorbed with the exploration of rationality and science than with focusing on, and excelling in, your own field. I myself am guilty of this.