This question is broader than just AI. Economic growth is closely tied to technological advancement, and technological advancement in general carries great risks and great benefits.
Consider nuclear weapons, for instance: Was humanity ready for them? They are now something that could destroy us at any time. But on the other hand, they might be the solution to an oncoming asteroid, the kind of threat that has loomed over us for millions of years.
Likewise, nanotechnology could create a grey goo event that kills us all; or it could lead to a world without poverty, without disease, where we all live as long as we like and have essentially unlimited resources.
It's also worth asking whether slowing technology would even help; cultural advancement seems somewhat dependent upon technological advancement. It's not clear to me that had we taken another 100 years to get nuclear weapons we would have used them any more responsibly; perhaps it simply would have taken that much longer to achieve the Long Peace.
In any case, I don't really see any simple intervention that would slow technological advancement without causing an enormous amount of collateral damage. So unless you're quite sure that the benefit in terms of slowing down dangerous technologies like unfriendly AI outweighs the cost in slowing down beneficial technologies, I don't think slowing down technology is the right approach.
Instead, find ways to establish safeguards and create incentives for developing beneficial technologies faster. To some extent we already do this: Nuclear research continues at CERN and Fermilab, but when we learn that Iran is working on similar technologies we are concerned, because we don't think Iran's government is trustworthy enough to deal with these risks. There aren't enough safeguards against unfriendly AI or incentives to develop friendly AI, but that's something the Singularity Institute or similar institutions could very well work on: lobby for legislation on artificial intelligence, or raise funds for an endowment that supports friendliness research.
Well, ultimately, that was sort of the collective strategy the world used, wasn't it? (Not quite; a lot of low-level Nazis were pardoned after the war.)
And you can't ignore the collective action, now can you?
It's more a relative thing---"not quite as extremely biased towards academia as the average group of this level of intellectual orientation can be expected to be".
If so, then we're actually more rational, right? Because we're not biased against academia as most people are, and aren't biased toward academia as most academics are.
It's not quite so dire. You usually can't run experiments from home, but thanks to Internet publication of results you can interpret them from home. So a lot of theoretical work in almost every field can be done from outside academia.
otherwise we would see an occasional example of someone making a significant discovery outside academia.
Should we all place bets now that it will be Eliezer?
Negative selection may be good, actually, for the vast majority of people who are ultimately going to be mediocre.
It seems like it may hurt the occasional genius... but then again, there are a lot more people who think they are geniuses than really are geniuses.
In treating broken arms? Minimal difference.
In discovering new nanotechnology that will revolutionize the future of medicine? Literally all the difference in the world.
I think a lot of people don't like using percentiles because they are zero-sum: Exactly 25% of the class is in the top 25%, regardless of whether everyone in the class is brilliant or everyone in the class is an idiot.
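Here's a toy illustration of the zero-sum point (invented scores, a minimal sketch): percentile rank depends only on ordering within the class, so a uniformly brilliant class and a uniformly weak class produce identical percentile distributions.

```python
import numpy as np

# Two hypothetical classes with the same internal ordering:
strong_class = np.array([92, 94, 96, 98])  # everyone brilliant
weak_class = np.array([12, 14, 16, 18])    # everyone struggling

def percentile_ranks(scores):
    """Fraction of classmates each student outscores."""
    ranks = scores.argsort().argsort()  # 0 = lowest score
    return ranks / (len(scores) - 1)

print(percentile_ranks(strong_class))  # [0.    0.333 0.667 1.   ]
print(percentile_ranks(weak_class))    # [0.    0.333 0.667 1.   ] -- identical
```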
Well, you want some negative selection: Choose dating partners from among the set who are unlikely to steal your money, assault you, or otherwise ruin your life.
This is especially true for women, for whom the risk of being raped is considerably higher and obviously worth negative selecting against.
I don't think it's quite true that "fail once, fail forever", but the general point is valid that our selection process is too much about weeding out the bad rather than choosing the best. Also, academia doesn't seem to be very good at the negative selection that would make sense, e.g. excluding people who are likely to commit fraud or who have fundamentally anti-scientific values. (Otherwise, how can you explain how Duane Gish made it through Berkeley?)
I'm saying that the truth is not so horrifying that it will cause you to go into depression.
This is what I hope and desire to be true. But what I'm asking for here is evidence that this is the case, to counteract the evidence from depressive realism that would seem to say that no, actually the world is so terrible that depression is the only rational response.
What reason do we have to think that the world doesn't suck?
Politico, PolitiFact, FactCheck.org
The mutilation of male genitals in question is ridiculous in itself but hardly equivalent to the kind of mutilation done to female genitals.
Granted. Female mutilation is often far more severe.
But I think it's interesting that when the American Academy of Pediatrics proposed allowing female circumcision that really just was circumcision, i.e. cutting of the clitoral hood, people were still outraged. And so we see that even when the situation is made symmetrical, there persists what we can only call female privilege in this circumstance.
I know with 99% probability that the item on top of your computer monitor is not Jupiter or the Statue of Liberty. And a major piece of information that leads me to that conclusion is... you guessed it, the circumference of Jupiter and the height of the Statue of Liberty. So there you go, this "irrelevant" information actually does narrow my probability estimates just a little bit.
Not a lot. But we didn't say it was good evidence, just that it was, in fact, evidence.
(Pedantic: You could have a model of Jupiter or Liberty on top of your computer, but that's not the same thing as having the actual thing.)
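To make that concrete, here's a minimal Bayesian sketch. The priors and the available space are invented; the only real figures are the rough sizes of the two objects.

```python
# Toy Bayesian update: size facts rule out hypotheses about what
# sits on top of the monitor. Priors and space are invented.
SPACE_ON_MONITOR_M = 0.5  # assumed available space, in meters

hypotheses = {
    # object: (prior probability, characteristic size in meters)
    "webcam": (0.50, 0.05),
    "houseplant": (0.45, 0.30),
    "Statue of Liberty": (0.04, 93.0),   # ~93 m with pedestal
    "Jupiter": (0.01, 1.4e8),            # ~139,820 km diameter
}

def fits(size_m):
    # Likelihood of observing the object up there given its size.
    return 1.0 if size_m <= SPACE_ON_MONITOR_M else 1e-12

unnormalized = {h: p * fits(s) for h, (p, s) in hypotheses.items()}
total = sum(unnormalized.values())
print({h: v / total for h, v in unnormalized.items()})
# Jupiter and the Statue land at ~0 -- and their sizes did the work.
```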
The statistical evidence is that liberalism, especially social liberalism, is positively correlated with intelligence. This does not prove that liberalism is correct; but it does provide some mild evidence in that direction.
It's a subtle matter, but... you clearly don't really mean determinism here, because you've said a hundred times before how the universe is ultimately deterministic even at the quantum level.
Maybe predictability is the word we want. Or maybe it's something else, like fairness or "moral non-neutrality"; it doesn't seem fair that Hitler could have that large an impact by himself, even though there's nothing remotely non-deterministic about that assertion.
Yes, think about how none of us would ever have discovered Less Wrong if we never fucked around on the Internet.
This is not to say that we don't fuck around on the Internet more than we should, which I think I probably do and I wouldn't be surprised if most of you do as well.
Not critical to your point, but I can't stand this habitual exchange:
But there's a lot of small habits in everything we do, that we don't really notice. Necessary habits. When someone asks you how you are, the habitual answer is 'Fine, thank you,' or something similar. It's what people expect. The entire greeting ritual is habitualness, to the point that if you disrupt the greeting, it throws people off.
When people ask how I am, I want to give them information. I want to tell them, "Actually I've had a bad headache all day; and I'm underemployed right now and really lonely." Or sometimes I'm feeling good, and I want to say "I feel great!" and have them actually know that I feel great and not think that I'm just carrying through the formula.
Human speech is one of the most valuable resources in the universe, and here we are wasting it on things that convey no information.
It's about ten times easier to become vegetarian than it is to reduce your consumption of meat. Becoming vegetarian means refusing meat every time no matter what, and you can pretty much manage that from day one. Reducing your meat consumption means somehow judging how much meat you're eating and coming up with an idea of how low you want it to go, and pretty soon you're just fudging all the figures and eating as much as you were anyway.
Likewise, I tried for a long time to "reduce my soda drinking" and could not achieve this. Now I have switched to "sucralose-based sodas only" and I've been able to do it remarkably well.
For the most part I agree with this post, but I am not convinced that this is true:
Anyone can develop any “character trait.” The requirement is simply enough years of thoughts becoming words becoming actions becoming habit.
A lot of measured traits are extremely stable over the lifespan (IQ, conscientiousness, etc.) and seem very difficult, if not impossible, to train. So the idea that someone can simply get smarter through practice does not appear to be supported by the evidence.
Your understanding is wrong: http://www.oecd.org/dataoecd/8/19/40937574.pdf
The answer should be obvious: Expected utility.
In practical terms, this means weighting according to severity, because the number of people affected is very close to equal. So we focus on the worst forms of oppression first, and then work our way up towards milder forms.
This in turn means that we should be focusing on genital mutilation and voting rights. (And things like Elevatorgate, for those of you who follow the atheist blogosphere, should obviously be on a far back burner.)
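The arithmetic, as a toy sketch; every number below is invented, and "harm" stands in for expected disutility per person affected.

```python
# Severity-weighted triage: expected disutility = affected x harm.
# All figures are invented placeholders, not real estimates.
issues = {
    "genital mutilation": {"affected": 1.0, "harm": 9.0},
    "voting rights": {"affected": 1.0, "harm": 6.0},
    "conference etiquette": {"affected": 1.0, "harm": 0.1},
}

ranked = sorted(issues,
                key=lambda k: issues[k]["affected"] * issues[k]["harm"],
                reverse=True)
for name in ranked:
    print(name, issues[name]["affected"] * issues[name]["harm"])
# With 'affected' roughly equal, severity alone sets the priority order.
```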
Because female circumcision is rare and illegal in developed nations?
There's obviously a female advantage here, at least in the Western world. Mutilating female genitals draws the appropriate outrage, while mutilating male genitals is ignored or even condoned. (I've seen people accused of "anti-Semitism" just for pointing out that male circumcision has virtually no actual medical benefits.)
Upvoted because it's a well-sourced and coherent argument.
Which is not to say that I agree with the conclusion. Okay, so there may be this effect of women being identified with their bodies.
But here's the thing: WE ARE OUR BODIES. We should be identifying with them, and if we're not, that's actually a very serious defect in our thinking (probably the defect that leads to such nonsense as dualism and religion).
Now, I guess you could say that maybe women are taught to care too much about physical appearance or something like that (they should care about other things as well, like intelligence, kindness, etc.). But a lot of feminists seem to be arguing that we should not care about how our bodies look at all, which is blatantly absurd.
Indeed, one thing that I know I have done wrong in my life and that other people have done to me to hurt me is to ignore my body. I have a tendency to think in terms of my mind and body being separate things, like my body is just a house my mind lives in. And then other people tend to treat me as some kind of asexual being that has transcended bodily form. The result is a very screwed-up body image and a lot of sexual frustration. On the definition you just gave, I am apparently under-objectified.
I'm not sure I would call it "oppression", but it's clearly true that heterosexual men are by far the MOST controlled by restrictive gender norms. It is straight men who are most intensely shoehorned into this concept of "masculinity" that may or may not suit them, and their status is severely downgraded if they deviate in any way.
If you doubt this, imagine a straight man wearing eye shadow and a mini-skirt. Compare to a straight woman wearing a tuxedo.
See the difference?
I've always found that recommendations of what to do are much more useful than any kind of praise, reward, punishment, or criticism.
On the other hand, if everyone told you how to do everything, you might never learn the very important skill of teaching yourself to do things.
If that's the case (and it seems like it is), then reinforcing yourself is going to be almost impossible, because you will by definition know the reinforcement script.
Everyone getting an A isn't reinforcement. Reinforcement has to be conditional on something. If you give everyone who writes a long paper an A, that's reinforcing writing long papers. If you give everyone who writes a well-written paper an A, that's reinforcing well-written papers (and probably more what you want to do).
But if you just give everyone an A, that may be positive, but it simply isn't reinforcement.
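A minimal sketch of the conditionality point, with an invented toy learner: when the grade depends on the behavior, the learner differentiates; when everyone gets an A, the action values converge to the same number and nothing is differentially reinforced.

```python
import random

def train(grade_fn, episodes=10000, eps=0.1, lr=0.1):
    """Tiny epsilon-greedy learner over two paper-writing behaviors."""
    q = {"long": 0.0, "good": 0.0}
    for _ in range(episodes):
        explore = random.random() < eps
        a = random.choice(list(q)) if explore else max(q, key=q.get)
        q[a] += lr * (grade_fn(a) - q[a])  # update toward received grade
    return q

# Conditional A: only well-written papers get grade 1.0.
print(train(lambda a: 1.0 if a == "good" else 0.0))  # "good" dominates

# Unconditional A: every behavior gets 1.0, so both values -> 1.0.
print(train(lambda a: 1.0))  # no differential reinforcement
```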
So you're saying that while happy people may typically be more irrational, it's still possible to be rational and happy.
I guess I agree with that. But sometimes I feel like I may just hope this is true, and not actually have good evidence for it.
Makes sense from the corporation's perspective. But also kinda sounds like moral hazard to me.
Well, maybe. Depending on how much it costs to do that experimental treatment, compared to other things we could do with those resources.
(Actually a large part of the problem with rising medical costs in the developed world right now is precisely due to heavier use of extraordinary experimental treatments.)
Often it clearly isn't; so don't do that sort of research.
Don't spend $200 million trying to determine if there are a prime number of green rocks in Texas.
Though that's actually illegal, so you'd have to include the chance of getting caught.
The trick is to be able to tell the difference.
And what a trick it is!
This is why I have decided not to be an entrepreneur. All the studies say that your odds are just not good enough to be worth it.
This makes perfect sense in terms of Bayesian reasoning. Unexpected evidence is much more powerful evidence that your model is defective.
If your model of the world predicted that the Catholic Church would never say this, well... your model is wrong in at least that respect.
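In toy numbers (all invented): suppose model M1 said such a statement from the Church was near-impossible, while M2 said it was merely unlikely-but-plausible.

```python
# Bayes update after observing the 'impossible' statement.
prior = {"M1": 0.9, "M2": 0.1}
likelihood = {"M1": 0.001, "M2": 0.3}  # P(statement | model), invented

unnormalized = {m: prior[m] * likelihood[m] for m in prior}
z = sum(unnormalized.values())
print({m: v / z for m, v in unnormalized.items()})
# M1 falls from 0.90 to ~0.03: the more surprising the evidence,
# the harder it cuts against the model that called it impossible.
```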
I don't think you're just rationalizing. I think this is exactly what the philosophy of mathematics needs in fact.
If we really understand the foundations of mathematics, Gödel's theorems should seem to us, if not irrelevant, then perfectly reasonable, perhaps even trivially obvious (or at least trivially obvious in hindsight, which is of course not the same thing), the way that a lot of very well-understood things seem to us.
In my mind I've gotten fairly close to this point, so maybe this will help: By being inside the system, you're always going to get "paradoxes" of self-reference that aren't really catastrophes.
For example, I cannot coherently and honestly assert this statement: "It is raining in Bangladesh but Patrick Julius does not believe that." The statement could in fact be true; it has often been true in the past. But I can't assert it, because I am part of it, and part of what it says is that I don't believe it, and hence can't assert it.
Likewise, Gödel's theorems are a way of making number theory talk about itself and say things like "Number theory can't prove this statement"; well, of course it can't, because you made the statement about number theory proving things.
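For what it's worth, here is the parallel in loose symbols (my own informal notation, not a proof):

```latex
% Moore-style sentence M: possibly true, but not assertable by me,
% since asserting it expresses the very belief it denies.
M \;\equiv\; \mathrm{Raining}(\mathrm{Bangladesh}) \wedge \neg B_{\mathrm{me}}(M)

% Goedel sentence G for a theory T, via the diagonal lemma:
G \;\leftrightarrow\; \neg \mathrm{Prov}_T(\ulcorner G \urcorner)
% If T is consistent, T cannot prove G -- which is just what G says.
```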
Well, some rather serious physicists have considered the idea: tachyons
But we know that he was unusual: He has a very high IQ. This by itself raises the probability of being a math crank (it also raises the probability of being a mathematician of course).
It's similar to how our LW!Harry Potter has increased chances of being both hero and Dark Lord.
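In invented numbers, to show both posteriors can rise at once:

```python
# All base rates and likelihoods are made up for illustration.
p_crank, p_mathematician = 1e-5, 1e-4
p_high_iq = 0.02
p_high_iq_given_crank = 0.5          # assumed: cranks skew high-IQ
p_high_iq_given_mathematician = 0.9  # assumed: mathematicians even more so

print(p_crank * p_high_iq_given_crank / p_high_iq)                  # 2.5e-4
print(p_mathematician * p_high_iq_given_mathematician / p_high_iq)  # 4.5e-3
# Both posteriors exceed their priors: high IQ is evidence for each,
# exactly the hero-and-Dark-Lord structure.
```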
Actually, perpetual motion using vacuum energy might really be feasible, since the supply of vacuum energy keeps growing as space itself expands... at present, it looks sort of like a loophole in the laws of nature.
On the other hand, quantum gravity may close this loophole.
I did exactly the same thing.
I also discovered shortly thereafter that I could force a map to require n colors if I allowed discontiguous regions, which might seem trivial... except that real nations on real maps are sometimes discontiguous (Alaska, anyone?).
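Here's a sketch of why discontiguity breaks the four-color bound. If a country may be several patches, any two countries can be made to share a border, so any graph (including the complete graph K_n) is the adjacency graph of some map; K_n then forces n colors. The coloring below is greedy, which is optimal on a clique.

```python
import itertools

def colors_needed(n, adjacent):
    """Greedy-color countries 0..n-1 given an adjacency predicate."""
    color = {}
    for c in range(n):
        used = {color[d] for d in range(c) if adjacent(c, d)}
        color[c] = next(k for k in itertools.count() if k not in used)
    return max(color.values()) + 1

# K_n: every country borders every other (realizable with patches).
for n in (4, 5, 10):
    print(n, colors_needed(n, lambda a, b: True))  # prints n, n
```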
It looks like there's still some serious controversy on the issue.
But suppose for a moment that it's true: Suppose that depressed people really do have more accurate beliefs, and that this really is related to their depression.
What does this mean for rationality? Is it more rational to be delusional and happy or to be accurate and sad? Or can we show that even in light of this data there is a third option, to actually be accurate and happy?
Depressive realism is an incredibly, well, depressing fact about the world.
Is there something we're missing about it though? Is the world actually such that understanding it better makes you sad, or is it rather that for whatever reason sad people happen to be better at understanding the world?
And if it is in fact that understanding makes you sad... what does this mean for rationality?
Actually, realizing this parallel causes me to be even more dubious of the efficient market hypothesis.
As compelling as it may sound when you say it, this line of reasoning plainly doesn't work for scientific truth... so why should it work in finance?
Behavioral finance gives us plenty of reasons to think that whole markets can remain radically inefficient for long periods of time. What this means for the individual investor, I'm not sure. But what it means for the efficient market hypothesis? Death.
I think majoritarianism is ultimately opposed to tsuyoku naritai, because it prevents us from ever advancing beyond what the majority believes. We rely upon others to do knowledge innovation for us, waiting for the whole society to, for example, believe in evolution, or understand calculus, before we will do so.
Though he might change his mind as we explained how to cure a whole bunch of diseases he thought were intractable.
Actually I think I tend to do the opposite. I undervalue subgoals and then become unmotivated when I can't reach the ultimate goal directly.
E.g. I'm trying to get published. Book written, check. Query letters written, check. Queries sent to agents, check. All these are valuable subgoals. But they don't feel like progress, because I can't check off the box that says "book published".
I largely agree with you, but I think that there's something we as rationalists can realize about these disagreements, which helps us avoid many of the most mind-killing pitfalls.
You want to be right, not be perceived as right. What really matters, when the policies are made and people live and die, is who was actually right, not who people think is right. So the pressure to be right can be a good thing, if you leverage it properly into actually trying to get the truth. If you use it to dismiss and suppress everything that suggests you are wrong, that's not being right; it's being perceived as right, which is a totally different thing. (See also the Litany of Tarski.)
There is another way: Look really really hard with tools that would be expected to work. If you find something? Yay, your hypothesis is confirmed. If you don't? You'd better start doubting your hypothesis.
You already do this in many situations I'm sure. If someone said, "You have a million dollars!" and you looked in your pockets, your bank accounts, your stock accounts (if any), etc. and didn't find a million dollars in them (or collectively in all of them put together), you would be pretty well convinced that the million dollars you allegedly have doesn't exist. (In fact, depending on your current economic status you might have a very low prior in the first place; I know I would.)
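The same point in toy numbers (the prior and detection rates are invented):

```python
# How hard a thorough search cuts: P(money | searched, found nothing).
p_money = 0.01          # assumed prior that the million exists
p_find_if_money = 0.99  # pockets + bank + brokerage would find it
# P(find | no money) = 0: a search can't turn up money that isn't there.

p_no_find = p_money * (1 - p_find_if_money) + (1 - p_money)
posterior = p_money * (1 - p_find_if_money) / p_no_find
print(posterior)  # ~1e-4: absence of evidence, as strong evidence of absence
```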
That's a good point. And clearly court standards for evidence are not the same as Bayesian standards; in court lots of things don't count that should (like base rate probabilities), and some things count more than they should (like eyewitness testimony).